
Namesilo Bulk Domain Availability Checker

Introduction

The Namesilo Bulk Domain Availability Checker workflow is an automation solution designed to check the registration status of multiple domains simultaneously using the Namesilo API.

This workflow efficiently processes large lists of domains by splitting them into manageable batches, adhering to API rate limits, and compiling the results into a convenient Excel spreadsheet.

It eliminates the tedious process of manually checking domains one by one, saving significant time for domain investors, web developers, and digital marketers. The workflow is particularly valuable during brainstorming sessions for new projects, when conducting domain portfolio audits, or when preparing domain acquisition strategies.

By automating the domain availability check process, users can quickly identify available domains for registration without the hassle of navigating through multiple web interfaces.

Who is this for?

This workflow is ideal for:

  • Domain investors and flippers who need to check multiple domains quickly
  • Web developers and agencies evaluating domain options for client projects
  • Digital marketers researching domain availability for campaigns
  • Business owners exploring domain options for new ventures
  • IT professionals managing domain portfolios

Users should have basic familiarity with n8n workflow concepts and a Namesilo account from which to obtain an API key. No coding knowledge is required, though an understanding of the domain name system is beneficial.

What problem is this workflow solving?

Checking domain availability one by one is a time-consuming, tedious process, especially when dealing with dozens or hundreds of potential domains. This workflow addresses several key challenges:

  1. Manual Inefficiency: Eliminates the need to individually search for each domain through registrar websites.
  2. Rate Limiting: Handles API rate limits automatically with built-in waiting periods.
  3. Data Organization: Compiles availability results into a structured Excel file rather than scattered notes or multiple browser tabs.
  4. Bulk Processing: Processes up to 200 domains per batch, with the ability to handle unlimited domains across multiple batches.
  5. Time Management: Frees up valuable time that would otherwise be spent on repetitive manual checks.

What this workflow does

Overview

The workflow takes a list of domains, processes them in batches of up to 200 domains per request (to comply with API limitations), checks their availability using the Namesilo API, and compiles the results into an Excel spreadsheet showing which domains are available for registration and which are already taken.
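For reference, a single availability check is an HTTP GET against Namesilo's `checkRegisterAvailability` operation with a comma-separated list of domains. The sketch below (plain JavaScript with placeholder values) shows roughly what the "Namesilo Requests" node sends and the reply shape the workflow expects; treat the exact parameters and response fields as assumptions to verify against Namesilo's API documentation.

```javascript
// Sketch of one batch request to Namesilo (the "Namesilo Requests" HTTP Request
// node builds an equivalent URL). YOUR_API_KEY and the domain list are placeholders.
const apiKey = 'YOUR_API_KEY';
const batch = ['example.com', 'example.net', 'example.org']; // up to 200 per request

const url =
  'https://www.namesilo.com/api/checkRegisterAvailability' +
  `?version=1&type=xml&key=${apiKey}&domains=${batch.join(',')}`;

// Expected XML reply (simplified, assumed structure): available and unavailable
// domains come back in separate sections, which "Parse Data" turns into rows.
// <namesilo>
//   <reply>
//     <available><domain>example.com</domain></available>
//     <unavailable><domain>example.net</domain></unavailable>
//   </reply>
// </namesilo>
console.log(url);
```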

Process

  1. Input Setup: The workflow begins with a manual trigger and uses the "Set Data" node to collect the list of domains to check and your Namesilo API key.
  2. Domain Processing: The "Convert & Split Domains" node transforms the input list into batches of up to 200 domains to comply with API limitations (a minimal sketch of this logic appears after this list).
  3. Batch Processing: The workflow loops through each batch of domains.
  4. API Integration: For each batch, the "Namesilo Requests" node sends a request to the Namesilo API to check domain availability.
  5. Data Parsing: The "Parse Data" node processes the API response, extracting information about which domains are available and which are taken.
  6. Rate Limit Management: A 5-minute wait period is enforced between batches to respect Namesilo's API rate limits.
  7. Data Compilation: The "Merge Results" node combines all the availability data.
  8. Output Generation: Finally, the "Convert to Excel" node creates an Excel file with two columns: Domain and Availability (showing "Available" or "Unavailable" for each domain).
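The splitting step referenced in step 2 amounts to a small piece of Code-node JavaScript. The snippet below is a minimal sketch, assuming the "Set Data" node exposes a newline-separated `Domains` string and that each batch is emitted as a separate item; node and field names are taken from this template, but the exact code in your copy may differ.

```javascript
// n8n Code node sketch: turn a newline-separated domain list into batches of 200.
// Assumes the incoming item carries the "Domains" field set in the "Set Data" node.
const raw = $input.first().json.Domains || '';

const domains = raw
  .split('\n')
  .map((d) => d.trim())
  .filter((d) => d.length > 0);

const batchSize = 200; // Namesilo accepts up to 200 domains per request
const batches = [];
for (let i = 0; i < domains.length; i += batchSize) {
  // One item per batch; the API expects the domains as a comma-separated list
  batches.push({ json: { domains: domains.slice(i, i + batchSize).join(',') } });
}

return batches; // consumed by the loop and the "Namesilo Requests" node
```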

Setup

  1. Import the workflow: Download the workflow JSON file and import it into your n8n instance.
  2. Get Namesilo API key: Create a free account at Namesilo and obtain your API key from https://www.namesilo.com/account/api-manager
  3. Configure the workflow:
    • Open the "Set Data" node
    • Enter your Namesilo API key in the "Namesilo API Key" field
    • Enter your list of domains (one per line) in the "Domains" field (see the example after this list)
  4. Save and activate: Save the workflow and run it using the manual trigger.
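As a hypothetical example, the "Domains" field takes nothing more than a plain list with one domain per line:

```
my-startup-idea.com
my-startup-idea.net
coolprojectname.io
coolprojectname.dev
```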

How to customize this workflow to your needs

  • Modify domain input format: You can adjust the code in the "Convert & Split Domains" node if your domain list comes in a different format.
  • Change batch size: If needed, you can modify the batch size (currently set to 200) in the "Convert & Split Domains" node to accommodate different API limitations.
  • Adjust wait time: If you have a premium API account with different rate limits, you can modify the wait time in the "Wait" node.
  • Enhance output format: Customize the "Convert to Excel" node to add additional columns or formatting to the output file.
  • Add domain filtering: You could add a node before the API request to filter domains based on specific criteria (length, keywords, TLDs); a minimal sketch of such a filter appears after this list.
  • Integrate with other services: Connect this workflow to domain registrars to automatically register available domains that meet your criteria.
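For the filtering idea above, a pre-filter can be added as an extra Code node between "Set Data" and "Convert & Split Domains". The snippet below is a minimal, hypothetical sketch that keeps only short .com/.io domains; the criteria and thresholds are illustrative, not part of the original template.

```javascript
// Hypothetical n8n Code node: filter the domain list before it is batched.
// Adjust allowedTlds / maxLength to your own criteria.
const allowedTlds = ['.com', '.io'];
const maxLength = 12; // maximum characters before the TLD

const raw = $input.first().json.Domains || '';
const filtered = raw
  .split('\n')
  .map((d) => d.trim().toLowerCase())
  .filter((d) => d.length > 0)
  .filter((d) => allowedTlds.some((tld) => d.endsWith(tld)))
  .filter((d) => d.slice(0, d.lastIndexOf('.')).length <= maxLength);

// Pass the reduced list downstream in the same "Domains" field
return [{ json: { ...$input.first().json, Domains: filtered.join('\n') } }];
```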
