
Fetch Dynamic Prompts from GitHub and Auto-Populate n8n Expressions in Prompt



Who Is This For?

This workflow is designed for AI engineers, automation specialists, and content creators who need a scalable system to dynamically manage prompts stored in GitHub. It eliminates manual updates, enforces required variable checks, and ensures that AI interactions always receive fully processed prompts.


🚀 What Problem Does This Solve?

Manually managing AI prompts can be inefficient and error-prone. This workflow:

  • Fetches dynamic prompts from GitHub
  • Auto-populates placeholders with values from the setVars node
  • Ensures all required variables are present before execution
  • Processes the formatted prompt through an AI agent


🛠 How This Workflow Works

This workflow consists of three key stages: prompt retrieval, variable validation, and AI processing.


1️⃣ Retrieve the Prompt from GitHub (HTTP Request → Extract from File → SetPrompt)

  • The workflow starts manually or via an external trigger.
  • It fetches a text-based prompt stored in a GitHub repository.
  • The Extract from File Node retrieves the content from the GitHub file.
  • The SetPrompt Node stores the prompt, making it accessible for processing.

📌 Note:
The prompt must contain n8n expression format variables (e.g., {{ $json.company }}) so they can be dynamically replaced.


2️⃣ Extract & Auto-Populate Variables (Check All Prompt Vars → Replace Variables)

  • A Code Node scans the prompt for placeholders in the n8n expression format ({{ $json.variableName }}).
  • The workflow compares required variables against the setVars node:
    • If all variables are present, it proceeds to variable replacement.
    • If any variables are missing, the workflow stops and returns an error listing them.
  • The Replace Variables Node replaces all placeholders with values from setVars.

📌 Example of a properly formatted GitHub prompt:

Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.

This ensures seamless replacement when processed in n8n.
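
📌 For reference, here is a minimal Code node sketch of how the check and replacement could work (the item and node names are assumptions, not the template's exact internals):

// A minimal sketch, assuming the prompt text arrives as $json.prompt
// and the variables live on a node named setVars.
const prompt = $json.prompt;
const vars = $('setVars').first().json;

// Collect every placeholder written as {{ $json.variableName }}
const required = [...prompt.matchAll(/\{\{\s*\$json\.(\w+)\s*\}\}/g)].map(m => m[1]);

// Stop the workflow if anything is missing (see Error Handling below)
const missing = required.filter(name => vars[name] === undefined);
if (missing.length) {
  throw new Error(`⚠️ Missing Required Variables: ${JSON.stringify(missing)}`);
}

// Replace each placeholder with its value from setVars
const completed = prompt.replace(
  /\{\{\s*\$json\.(\w+)\s*\}\}/g,
  (_, name) => String(vars[name])
);

return [{ json: { prompt: completed } }];

Using the same regular expression for both the check and the replacement keeps the two steps consistent, so no placeholder can slip through unreplaced.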


3️⃣ AI Processing & Output (Set Completed Prompt → AI Agent → Prompt Output)

  • The Set Completed Prompt Node stores the final, processed prompt.
  • The AI Agent Node (Ollama Chat Model) processes the prompt.
  • The Prompt Output Node returns the fully formatted response.

📌 Optional: Modify this to use OpenAI, Claude, or other AI models.


⚠️ Error Handling: Missing Variables

If a required variable is missing, the workflow stops execution and provides an error message:

⚠️ Missing Required Variables: ["launch_date"]

This ensures no incomplete prompts are sent to AI agents.


✅ Example Use Case

📜 GitHub Prompt File (Using n8n Expressions)

Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.

🔹 Variables in setVars Node

{
  "company": "PropTechPro",
  "features": "AI-powered Property Management",
  "launch_date": "March 15, 2025"
}

✅ Successful Output

Hello PropTechPro, your product AI-powered Property Management launches on March 15, 2025.

🚨 Error Output (If Missing launch_date)

⚠️ Missing Required Variables: ["launch_date"]

🔧 Setup Instructions

1️⃣ Connect Your GitHub Repository

  • Store your prompt in a public or private GitHub repo.
  • The workflow will fetch the raw file using the GitHub API (see the URL patterns below).
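
📌 For reference, raw file content can typically be fetched through either of these URL patterns (owner, repo, branch, and file path are placeholders):

https://raw.githubusercontent.com/<owner>/<repo>/<branch>/prompts/my-prompt.txt
https://api.github.com/repos/<owner>/<repo>/contents/prompts/my-prompt.txt

The contents API returns Base64-encoded JSON by default; send an Accept: application/vnd.github.raw+json header (or decode the content field) to receive plain text. Private repositories also need a GitHub credential attached to the HTTP Request node.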

2️⃣ Configure the setVars Node

  • Define the required variables in the setVars node.
  • Make sure the variable names match the placeholders used in the prompt (e.g., a company variable for {{ $json.company }}).

3️⃣ Test & Run

  • Click Test Workflow to execute.
  • If variables are missing, it will show an error.
  • If everything is correct, it will output the fully formatted prompt.

⚡ How to Customize This Workflow

💡 Need CRM or Database Integration?

  • Connect the setVars node to an Airtable, Google Sheets, or HubSpot API to pull variables dynamically, as sketched below.
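
📌 A minimal sketch of reshaping a fetched record into the flat structure the placeholders expect, using a Code node ahead of the prompt branch (the fields property and field names are assumptions that depend on your data source):

// Map an incoming record (e.g., an Airtable row) to the setVars shape.
// Adjust the property paths to match the node you connect.
return $input.all().map(item => ({
  json: {
    company: item.json.fields?.company,
    features: item.json.fields?.features,
    launch_date: item.json.fields?.launch_date,
  },
}));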

💡 Want to Modify the AI Model?

  • Replace the Ollama Chat Model with OpenAI, Claude, or a custom LLM endpoint.

📌 Why Use This Workflow?

✅ No Manual Updates Required – Fetches prompts dynamically from GitHub.
✅ Prevents Broken Prompts – Ensures required variables exist before execution.
✅ Works for Any Use Case – Handles AI chat prompts, marketing messages, and chatbot scripts.
✅ Compatible with All n8n Deployments – Works on Cloud, Self-Hosted, and Desktop versions.
