Task:
Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why:
You can prototype or replace a backend process with a single workflow (a usage sketch follows the list below)
Main use cases:
Replace backend logic with a workflow
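As a quick usage sketch, here is how a client could call the finished endpoint. The path `my-endpoint`, the instance URL, and the response shape are all placeholders; they depend entirely on how you configure the Webhook and Respond to Webhook nodes.

```javascript
// Hypothetical client call to the workflow's endpoint (Node 18+, global fetch).
// "my-endpoint" is whatever path you set on the Webhook node.
const res = await fetch('https://your-n8n-instance/webhook/my-endpoint', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada' }),
});

// The body is whatever the Respond to Webhook node was configured to return.
console.log(await res.json());
```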
Task:
Merge two datasets into one based on matching rules
Why:
A powerful capability of n8n is how easily you can branch a workflow to process different datasets. Even more powerful is the ability to join those branches back together with SQL-like joining logic; a sketch of those modes follows the list below.
Main use cases:
Append data sets
Keep only new items
Keep only existing items
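In practice the Merge node implements these modes for you; the following Code-node sketch just makes the SQL-like logic explicit. The branch names 'Branch A'/'Branch B' and the join key `id` are hypothetical.

```javascript
// Sketch of the three merge modes over two hypothetical branches.
const left = $('Branch A').all().map(item => item.json);
const right = $('Branch B').all().map(item => item.json);
const rightIds = new Set(right.map(r => r.id)); // join key "id" is an assumption

const appended = [...left, ...right];                      // append data sets
const onlyNew = left.filter(l => !rightIds.has(l.id));     // keep only new items
const onlyExisting = left.filter(l => rightIds.has(l.id)); // keep only existing items

return onlyExisting.map(json => ({ json }));
```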
This workflow backs up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node.
It then loops over the data and checks GitHub to see whether a file named after the workflow already exists. It then updates the file on GitHub if it exists and has changed, creates a new file if it doesn't exist, and ignores the file if the content is unchanged.
Config Options
repo_owner - GitHub owner
repo_name - GitHub repository name
repo_path - Path within the GitHub repository
> This workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n
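The create/update/ignore decision might look roughly like the following Code-node sketch. The field names `workflow` and `githubFile` are illustrative, not the template's exact wiring.

```javascript
// Decide what to do with one workflow, assuming an earlier GitHub "Get File"
// step put the existing file (with base64 content) on the item, or nothing
// if the file does not exist yet. Field names here are assumptions.
const exported = JSON.stringify($json.workflow, null, 2);
const existing = $json.githubFile;

let action;
if (!existing) {
  action = 'create'; // no file for this workflow name yet
} else {
  const current = Buffer.from(existing.content, 'base64').toString('utf8');
  action = current === exported ? 'ignore' : 'update';
}
return [{ json: { action } }];
```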
This workflow extracts data from a multi-page website.
The workflow:
1) Starts from the country list at https://www.theswiftcodes.com/browse-by-country/.
2) Loads every country page (e.g., https://www.theswiftcodes.com/albania/).
3) Paginates through every page within the country page.
4) Extracts data from each country page.
5) Saves the data to MongoDB.
6) Paginates through all pages in all countries.
It uses the getWorkflowStaticData('global') method to recover the next page (saved during the previous iteration) and then proceeds through all the pages.
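A minimal sketch of that pagination state, written for a Function node (newer Code nodes expose the same helper as $getWorkflowStaticData); the key name `nextPage` is an assumption. Note that static data only persists across active (production) executions, not manual test runs.

```javascript
// Recover the cursor saved by the previous iteration, defaulting to page 1.
const staticData = getWorkflowStaticData('global');
const page = staticData.nextPage || 1;

// ... fetch and extract page `page` here ...

// Persist the cursor so the next iteration (or run) continues where we stopped.
staticData.nextPage = page + 1;
return [{ json: { page } }];
```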
A first section retrieves and extracts the country list.
Next, the workflow checks whether a locally cached copy of the page is available and, if so, reads the cached page from disk.
Finally, it saves the data to MongoDB and paginates through all the pages of every country.
I have added a cache system that saves each visited page to the n8n local disk. If the workflow is relaunched, it checks whether a cache file already exists and skips the requests to the webpage that are no longer needed.
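A sketch of that cache check, assuming the node is allowed to require Node's fs module (e.g. NODE_FUNCTION_ALLOW_BUILTIN=fs) and using a hypothetical cache path layout.

```javascript
// Reuse a cached page when it exists; otherwise signal a cache miss so the
// workflow falls through to the HTTP Request node. Path layout is an assumption.
const fs = require('fs');
const cachePath = `/tmp/swift-cache/${$json.country}-${$json.page}.html`;

if (fs.existsSync(cachePath)) {
  return [{ json: { html: fs.readFileSync(cachePath, 'utf8'), cached: true } }];
}
return [{ json: { cached: false } }];
```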
If the data on the website changes over time, you can add a Cron node to re-check it once per week.
Finally, before inserting data into MongoDB, the best way to avoid duplicates is to check that the swift_code (the primary key of the collection) doesn't already exist.
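One way to express that check is an upsert keyed on swift_code, sketched here with the MongoDB Node.js driver (the n8n MongoDB node can do the equivalent with its update operation and upsert enabled); the connection URI and database/collection names are placeholders.

```javascript
// Insert each scraped record at most once, keyed on swift_code.
const { MongoClient } = require('mongodb'); // needs NODE_FUNCTION_ALLOW_EXTERNAL=mongodb
const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
await client.connect();

const doc = $json; // the scraped record, including its swift_code
await client.db('swift').collection('codes').updateOne(
  { swift_code: doc.swift_code }, // match on the primary key
  { $setOnInsert: doc },          // only written when no match exists
  { upsert: true }
);

await client.close();
return [{ json: doc }];
```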
I recommend using a proxy for all requests to avoid IP blocks. A good solution for proxying plus IP rotation is scrapoxy.io.
This workflow is well suited to small data requirements. If you need to scrape dynamic data, you can use a headless browser or a similar service.
If you want to scrape huge lists of URIs, I recommend Scrapy + Scrapoxy.
This workflow scrapes Google Maps data efficiently using SerpAPI.
You'll get all the Google Maps data at a lower cost than the official Google Maps API.
Provide your Google Maps search URL as input, and you'll get a list of places with many data points (see the request sketch after this list), such as:
phone number
website
rating
reviews
address
And much more.
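Under the hood this is a SerpAPI google_maps search, which the workflow issues from an HTTP Request node. A standalone sketch of that call, where the query and API key are placeholders:

```javascript
// Query SerpAPI's Google Maps engine and print a few of the data points above.
const params = new URLSearchParams({
  engine: 'google_maps',
  q: 'coffee shops in Paris', // placeholder search
  api_key: 'YOUR_SERPAPI_KEY',
});

const res = await fetch(`https://serpapi.com/search.json?${params}`);
const data = await res.json();

for (const place of data.local_results ?? []) {
  console.log(place.title, place.phone, place.website, place.rating, place.address);
}
```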
A full guide to implementing the workflow is available here:
https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
A temporary solution that uses the undocumented REST API to back up workflows to Google Drive.
Please note that this workflow has known issues. It does not support versioning, so it creates a new copy of every workflow on each run; if you run it daily, the folder will grow quickly. Once I figure out how to handle versioning in Google Drive, I'll update it here.
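For reference, the export step presumably looks something like the sketch below. Since the endpoint is undocumented and internal, its path, authentication, and response shape are assumptions that may change between n8n versions.

```javascript
// Fetch all workflows from the internal REST API (assumed shape: { data: [...] }).
// Auth via a session cookie obtained from a prior login request is an assumption.
const sessionCookie = process.env.N8N_SESSION_COOKIE;

const res = await fetch('https://your-n8n-instance/rest/workflows', {
  headers: { cookie: sessionCookie },
});
const { data: workflows } = await res.json();

// Each workflow would then be uploaded to Google Drive as a JSON file.
console.log(`exported ${workflows.length} workflows`);
```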