
ScrapingBee and Google Sheets Integration Template

Created by: Sahil Sunny || scrapingbee

Last update: 3 days ago


This workflow contains community nodes that are only compatible with the self-hosted version of n8n.


This workflow lets users extract sitemap links using the ScrapingBee API. It only needs a domain name (e.g. www.example.com): it automatically checks robots.txt and sitemap.xml to find the links, and it is designed to re-run itself recursively whenever new .xml links are found while scraping the sitemap.

How It Works

  1. Trigger: The workflow waits for a webhook request containing a query parameter such as domain=www.example.com
  2. It then fetches the site's robots.txt file to look for sitemap entries; if none are found, it falls back to sitemap.xml
  3. Once it finds .xml links, it scrapes them recursively to extract the website's links
  4. For each .xml file, it first checks whether the response is binary and whether it is compressed XML
  5. If the response is plain text, it runs one Code node to extract regular website links and another to extract .xml links (see the sketch after this list)
  6. If the response is an uncompressed binary, it extracts the text from the binary and then extracts the website links and .xml links
  7. If the response is a compressed binary, it decompresses it first, then extracts the text, followed by the website links and .xml links
  8. After extracting website links, it appends them directly to the Google Sheet
  9. After extracting .xml links, it scrapes them recursively until all website links have been found
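
To make steps 4-7 concrete, here is a minimal Node.js sketch of that branch logic. It is an illustration, not the exact contents of the workflow's Code nodes: it detects gzip by its magic bytes, decompresses if needed, and splits a sitemap's <loc> entries into page links and nested .xml links.

```javascript
const zlib = require("zlib");

function extractLinks(body) {
  // The response body may be a Buffer (binary) or a string (plain text).
  let buf = Buffer.isBuffer(body) ? body : Buffer.from(body);

  // Gzip-compressed sitemaps (.xml.gz) start with the magic bytes 0x1f 0x8b.
  if (buf.length > 2 && buf[0] === 0x1f && buf[1] === 0x8b) {
    buf = zlib.gunzipSync(buf);
  }
  const text = buf.toString("utf8");

  // Sitemap entries live inside <loc>...</loc> tags.
  const locs = [...text.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1]);

  const isXml = (u) => u.endsWith(".xml") || u.endsWith(".xml.gz");
  return {
    xmlLinks: locs.filter(isXml),             // fed back into the workflow recursively
    pageLinks: locs.filter((u) => !isXml(u)), // appended to the Google Sheet
  };
}
```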

When the workflow is finished, you will see the output in the links column of the Google Sheet that you connected to the workflow.


Set Up Steps

  1. Get your ScrapingBee API key (you can find it in your ScrapingBee account dashboard)
  2. Create a new Google Sheet with an empty column named links. Connect to the sheet by signing in with your Google credentials and add your sheet's link to the node.
  3. Copy the webhook URL and send a GET request with domain as a query parameter. Example:
curl "https://webhook_link?domain=scrapingbee.com"

Customisation Options

  1. If the website you are scraping blocks your requests, you can try enabling a premium or stealth proxy in the Scrape robots.txt file, Scrape sitemap.xml file, and Scrape xml file nodes (see the sketch after this list).
  2. If you wish to store the data in a different app/tool, or to store it as a file, simply replace the Append links to sheet node with a relevant node.
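
For reference, a ScrapingBee request with a premium or stealth proxy enabled looks roughly like the sketch below (placeholder API key, example URL, Node.js 18+ for the global fetch); in the workflow itself you would toggle the equivalent options on the ScrapingBee nodes rather than building the request yourself.

```javascript
async function main() {
  const params = new URLSearchParams({
    api_key: "YOUR_API_KEY", // placeholder
    url: "https://www.example.com/sitemap.xml",
    premium_proxy: "true",   // or stealth_proxy: "true" for heavily protected sites
  });
  const res = await fetch(`https://app.scrapingbee.com/api/v1/?${params}`);
  console.log(res.status);
  console.log(await res.text());
}

main().catch(console.error);
```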

Next Steps

If you wish to scrape the pages behind the extracted links, you can implement a new workflow that reads the sheet or file produced by this workflow and, for each link, sends a request to ScrapingBee's HTML API and saves the returned data. A rough sketch of that follow-up loop is shown below.
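
Here is a rough Node.js sketch of that follow-up loop, assuming Node.js 18+ and that the links have already been exported from the sheet into an array (YOUR_API_KEY and the output file names are placeholders):

```javascript
const fs = require("fs/promises");

const API_KEY = "YOUR_API_KEY"; // placeholder

// Fetch one page through ScrapingBee's HTML API and return the HTML.
async function scrapePage(url) {
  const params = new URLSearchParams({ api_key: API_KEY, url });
  const res = await fetch(`https://app.scrapingbee.com/api/v1/?${params}`);
  if (!res.ok) throw new Error(`ScrapingBee returned ${res.status} for ${url}`);
  return res.text();
}

async function main() {
  // In practice, read these from the sheet or file this workflow produced.
  const links = ["https://www.scrapingbee.com/"];
  for (const [i, link] of links.entries()) {
    const html = await scrapePage(link);
    await fs.writeFile(`page-${i}.html`, html); // or hand off to your own storage
  }
}

main().catch(console.error);
```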


NOTE: Some heavy sitemaps can crash the workflow if it consumes more memory than is available on your n8n plan or self-hosted system. If this happens, we recommend either upgrading your plan or using a self-hosted instance with more memory.