Web Site Scraper for LLMs with Airtop

Created by: Airtop (cesar-at-airtop)

Last update: a month ago


Recursive Web Scraping

Use Case

Automating web scraping with recursive depth is ideal for collecting content across multiple linked pages, making it a good fit for content aggregation, lead generation, and research projects.

What This Automation Does

This automation reads a list of URLs from a Google Sheet, scrapes each page, appends the content to a Google Doc, and writes newly discovered links back to the sheet. It repeats this process for the number of iterations set by the configured scraping depth.
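
In effect, the Google Sheet doubles as both the work queue and the visited set: each row holds a URL plus a flag marking whether it has already been scraped. Here is a minimal sketch of that bookkeeping in plain Python; the field names are illustrative assumptions, not the template's actual sheet columns:

```python
# Each sheet row, sketched as a dict. Field names are assumptions for
# illustration, not the template's actual column headers.
rows = [{"url": "https://example.com/", "scraped": False}]

def pending_urls(rows):
    """URLs queued in the sheet that have not been scraped yet."""
    return [r["url"] for r in rows if not r["scraped"]]

def enqueue(rows, url):
    """Append a newly discovered link unless the sheet already has it."""
    if all(r["url"] != url for r in rows):
        rows.append({"url": url, "scraped": False})
```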

Input Parameters:

  • Seed URL: The starting URL to begin the scraping process.
    Example: https://example.com/
  • Links must contain: Only links containing this string are followed.
    Example: https://example.com/
  • Depth: The number of iterations (layers of links) to scrape beyond the initial set.
    Example: 3

How It Works

  1. Starts by reading the Seed URL from the Google Sheet.
  2. Scrapes each page and saves its content to the specified document.
  3. Extracts new links that contain the Links must contain string from each page and appends them to the Google Sheet.
  4. Repeats steps 2–3 another Depth - 1 times; with a Depth of 3, the seed pass is followed by two further layers of links.
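
As a rough illustration of steps 1–4, here is the same breadth-first loop in plain Python, with requests and BeautifulSoup standing in for Airtop's browser-based scraping and in-memory structures standing in for the Google Sheet and Google Doc. This sketches the control flow only, not the template's actual implementation:

```python
# A sketch of steps 1-4: requests/BeautifulSoup stand in for Airtop, and
# in-memory structures stand in for the Google Sheet and Google Doc.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

SEED_URL = "https://example.com/"      # Seed URL
MUST_CONTAIN = "https://example.com/"  # "Links must contain"
DEPTH = 3                              # layers of links to scrape

queue = [SEED_URL]   # stands in for the URL column of the Google Sheet
seen = set(queue)
document = []        # stands in for the Google Doc

for layer in range(DEPTH):
    next_queue = []
    for url in queue:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable pages

        soup = BeautifulSoup(resp.text, "html.parser")

        # Step 2: save the page's text content to the "document".
        document.append((url, soup.get_text(separator=" ", strip=True)))

        # Step 3: queue new links that contain the filter string.
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if MUST_CONTAIN in link and link not in seen:
                seen.add(link)
                next_queue.append(link)

    queue = next_queue  # Step 4: descend one layer deeper

print(f"Scraped {len(document)} pages across up to {DEPTH} layers")
```

Each pass over the queue corresponds to one layer of links, which is why a Depth of 3 yields the seed pass plus two further layers.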

Setup Requirements

  1. Airtop API key (free to generate).
  2. Credentials set up for Google Docs (requires creating a project in the Google Cloud Console). Read how to.
  3. Credentials set up for Google Sheets.

Next Steps

  • Add Filtering Rules: Filter which links to follow based on domain, path, or content type (see the sketch after this list).
  • Combine with Scheduler: Run this automation on a schedule to continuously explore newly discovered pages.
  • Export Structured Data: Extend the process to store extracted data in a CSV or database for analysis.
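
As a starting point for the filtering idea above, here is a sketch of a link-filter predicate in the same plain-Python style; the domains and path prefixes are made-up examples, not values from the template:

```python
from urllib.parse import urlparse

# Example rules; the specific domains and paths are illustrative only.
ALLOWED_DOMAINS = {"example.com", "www.example.com"}
BLOCKED_PATH_PREFIXES = ("/login", "/cart")

def should_follow(link: str) -> bool:
    """Return True if a discovered link passes the domain and path rules."""
    parts = urlparse(link)
    if parts.netloc not in ALLOWED_DOMAINS:
        return False
    return not parts.path.startswith(BLOCKED_PATH_PREFIXES)
```

A predicate like this would replace the simple MUST_CONTAIN substring check in the earlier loop sketch.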

Read more about website scraping for LLMs