HTTP Request node

Convert URL HTML to Markdown Format and Get Page Links

Published 25 days ago

Created by

simonscrapes

Template description

Use Case

Transform web pages into AI-friendly markdown format:

  • You need to process webpage content for LLM analysis
  • You want to extract both content and links from web pages
  • You need clean, formatted text without HTML markup
  • You want to respect API rate limits while crawling pages

What this Workflow Does

The workflow uses the Firecrawl.dev API to process webpages:

  • Converts HTML content to markdown format (see the request sketch after this list)
  • Extracts all links from each webpage
  • Handles API rate limiting automatically
  • Processes URLs in batches from your database
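
The HTTP Request node at the core of this workflow calls Firecrawl's scrape endpoint with your page URL and asks for both markdown and links in one response. The TypeScript sketch below is a rough illustration rather than the template's exact node configuration: the endpoint path, the formats array, and the response fields follow Firecrawl's v1 scrape API and may differ on your plan or a newer API version.

```typescript
// Hedged sketch of the Firecrawl call the workflow's HTTP Request node performs.
// Endpoint, payload and response shape are assumptions based on Firecrawl's v1 docs.
interface FirecrawlScrape {
  success: boolean;
  data?: { markdown?: string; links?: string[] };
}

async function scrapeToMarkdown(url: string, apiKey: string): Promise<FirecrawlScrape> {
  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // same header the template's HTTP Request node sets
      "Content-Type": "application/json",
    },
    // Ask Firecrawl for the markdown conversion and the page's links in one call.
    body: JSON.stringify({ url, formats: ["markdown", "links"] }),
  });
  if (!res.ok) throw new Error(`Firecrawl request failed: ${res.status}`);
  return (await res.json()) as FirecrawlScrape;
}
```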

Setup

  1. Create a Firecrawl.dev account and get your API key
  2. Add your Firecrawl API key to the HTTP Request node's Authorization header
  3. Connect your URL database to the input node (the column name must be "Page"), or edit the array in the "Example fields from data source" node; see the batching sketch after this list
  4. Configure your preferred output database connection
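
Steps 3 and 4 boil down to reading rows whose "Page" column holds a URL, scraping each one, and pausing between requests so Firecrawl's rate limit is respected. The sketch below reuses the scrapeToMarkdown helper from the previous sketch; the Row type, the sequential loop, and the 6-second delay are illustrative assumptions, not the template's actual node settings.

```typescript
// Hedged sketch of batching URLs from a "Page" column with simple rate limiting.
type Row = { Page: string };

const DELAY_MS = 6_000; // ~10 requests per minute; tune to your Firecrawl plan

async function processRows(rows: Row[], apiKey: string): Promise<FirecrawlScrape[]> {
  const results: FirecrawlScrape[] = [];
  for (const row of rows) {
    results.push(await scrapeToMarkdown(row.Page, apiKey));
    // Wait between calls so the API's rate limit is not exceeded.
    await new Promise((resolve) => setTimeout(resolve, DELAY_MS));
  }
  return results;
}
```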

How to Adjust it to Your Needs

  • Modify input source to pull URLs from different databases
  • Adjust rate limiting parameters if needed
  • Customize output format for your specific use case

Made by Simon @ automake.io


More AI workflow templates

OpenAI Chat Model node
SerpApi (Google Search) node

AI agent chat

This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions. To use this template, you need to be on n8n version 1.50.0 or later.
n8n Team
HTTP Request node
Merge node

Scrape and summarize webpages with AI

This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is not only efficient but also provides real value by summarizing valuable content. Note that to use this template, you need to be on n8n version 1.50.0 or later.
n8n Team
HTTP Request node
Markdown node

AI agent that can scrape webpages

⚙️🛠️🚀🤖🦾 This template is a PoC of a ReAct AI Agent capable of fetching arbitrary pages (not only Wikipedia or Google search results). On the top part there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content. Page content extraction starts by converting query parameters into a JSON object. There are three pre-defined parameters: url (the address of the page to fetch), method (full / simplified), and maxlimit (the maximum length of the final page; for longer pages an error message is returned to the agent). Page content fetching is a multi-step process: an HTTP Request node tries to get the page content, and if it is retrieved successfully, a series of post-processing steps begins: extract the HTML BODY content, remove all unnecessary tags to reduce the page size, further eliminate external URLs and IMG src values (depending on the method query parameter), and convert the remaining HTML to Markdown, reducing the page length even more while preserving the basic page structure. The remaining content is sent back to the agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node). NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object. Please reach out to Eduard if you need further assistance with your n8n workflows and automations! Note that to use this template, you need to be on n8n version 1.19.4 or later.
Eduard
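
For readers who want the gist of that post-processing chain outside n8n, here is a rough sketch under stated assumptions: it uses the cheerio and turndown npm packages (the workflow itself relies on built-in n8n nodes, not these libraries), and the list of stripped tags and the error behaviour are illustrative.

```typescript
// Hedged sketch of the simplification steps described above: keep the BODY,
// drop noisy tags, optionally strip link/image targets, convert to Markdown,
// and refuse pages that exceed maxlimit.
import * as cheerio from "cheerio";
import TurndownService from "turndown";

function simplifyPage(html: string, method: "full" | "simplified", maxlimit = 70000): string {
  const $ = cheerio.load(html);
  $("script, style, noscript, iframe, svg").remove(); // unnecessary tags that inflate size
  if (method === "simplified") {
    $("a").removeAttr("href"); // drop external URLs
    $("img").removeAttr("src"); // drop IMG src values
  }
  const markdown = new TurndownService().turndown($("body").html() ?? "");
  if (markdown.length > maxlimit) {
    // Mirror the workflow's behaviour: report an error instead of returning a huge page.
    throw new Error(`Page too long: ${markdown.length} > ${maxlimit}`);
  }
  return markdown;
}
```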
