Task:
Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why:
You can prototype or replace a backend process with a single workflow
Main use cases:
Replace backend logic with a workflow
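Once the Webhook node is active, any HTTP client can call the endpoint and receive whatever the Respond to Webhook node returns. A minimal sketch of such a call (the instance URL, path, and payload below are assumptions, not part of the template):

```ts
// Hypothetical call to the workflow's production webhook URL.
// Replace the host and path with the values shown on your Webhook node.
const res = await fetch("https://your-n8n-instance.com/webhook/my-endpoint", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ada" }),
});

// Whatever the Respond to Webhook node is configured to return comes back here.
console.log(await res.json());
```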
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through the basics of n8n, step by step.
Designed with beginners in mind, this tutorial provides a hands-on approach to learning n8n's basic functionalities.
You can still use an app in a workflow even if n8n doesn't have a dedicated node or operation for it. With the HTTP Request node, you can call any API endpoint and use the incoming data in your workflow.
Main use cases:
Connect with apps and services that n8n doesn't have a built-in integration for
Web scraping
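As a rough plain-code analogy, the HTTP Request node does something like the following and hands the parsed response to the next node as items (the URL below is only an illustration):

```ts
// Call an arbitrary API endpoint and use the incoming data downstream.
const response = await fetch("https://api.example.com/v1/resources"); // hypothetical endpoint
const data = await response.json();

// In n8n, each element becomes one item for the following nodes.
const items = Array.isArray(data) ? data : [data];
```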
How it works
This workflow can be divided into three branches, each serving a distinct purpose:
1. Splitting into Items (HTTP Request - Get Mock Albums):
The workflow initiates with a manual trigger (On clicking 'execute').
It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums."
The obtained data is split into items using the Item Lists node, facilitating easier management.
2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract):
Another branch of the workflow involves fetching a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random."
The HTML Extract node extracts the article title from the fetched Wikipedia page.
3. Handling Pagination (the final branch handles pagination for a GitHub API request):
It sends an HTTP request to "https://api.github.com/users/that-one-tom/starred," with parameters like the page number and items per page dynamically set by the Set node.
The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page).
This process repeats until all pages are fetched, allowing for comprehensive data retrieval.
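In plain code, the pagination branch behaves roughly like the loop below; the per-page value and stop condition mirror the Set and If nodes, so treat this as a sketch rather than the exact node configuration:

```ts
// Hedged sketch of the pagination loop built with the Set and If nodes.
const perPage = 10;            // "items per page", set by the Set node (value assumed)
let page = 1;                  // incremented by "Set - Increment Page"
const starred: unknown[] = [];

while (true) {
  const res = await fetch(
    `https://api.github.com/users/that-one-tom/starred?page=${page}&per_page=${perPage}`,
    { headers: { "User-Agent": "n8n-pagination-example" } },
  );
  const batch: unknown[] = await res.json();
  starred.push(...batch);

  // "If - Are we finished?": an incomplete page means there is nothing left to fetch.
  if (batch.length < perPage) break;
  page += 1;
}
```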
Task:
Merge two datasets into one based on matching rules
Why:
A powerful capability of n8n is to easily branch out the workflow in order to process different datasets. Even more powerful is the ability to join them back together with SQL-like joining logic.
Main use cases:
Appending data sets
Keep only new items
Keep only existing items
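As a plain-code analogy, "keep only existing items" corresponds to an inner join on a matching key, roughly as below (the field names are assumptions for illustration):

```ts
// Inner-join two datasets on a shared key, similar to the Merge node's matching mode.
type Customer = { id: number; name: string };
type Order = { customerId: number; total: number };

const customers: Customer[] = [{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }];
const orders: Order[] = [{ customerId: 1, total: 42 }];

const joined = orders.flatMap((order) => {
  const match = customers.find((c) => c.id === order.customerId);
  return match ? [{ ...match, ...order }] : []; // drop items with no match
});
// joined => [{ id: 1, name: "Ada", customerId: 1, total: 42 }]
```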
This workflow will back up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node.
It then loops over the data and checks GitHub to see whether a file named after the workflow already exists. It then updates the file on GitHub if it exists and has changed, creates a new file if it doesn't exist, and ignores the file if the content is unchanged.
Config Options
repo_owner - GitHub owner
repo_name - GitHub repository name
repo_path - Path within the GitHub repository
> This workflow has been updated to use the n8n node and the Code node, so it requires n8n version 0.198.0 or later.
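The exists/changed check can be sketched against the GitHub contents API as below; repo_owner, repo_name, and repo_path map to the config options above, while the token handling and comparison details are assumptions rather than the template's exact node wiring:

```ts
// Hedged sketch of the "does a file for this workflow already exist?" check.
const repoOwner = "your-github-user";   // repo_owner
const repoName = "n8n-backups";         // repo_name
const repoPath = "workflows";           // repo_path
const fileName = "My Workflow.json";    // derived from the workflow name

const res = await fetch(
  `https://api.github.com/repos/${repoOwner}/${repoName}/contents/${repoPath}/${encodeURIComponent(fileName)}`,
  {
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      "User-Agent": "n8n-backup-example",
    },
  },
);

if (res.status === 404) {
  // File is missing: create it with the exported workflow JSON.
} else {
  const existing = await res.json();
  // Compare existing.content (base64) with the export:
  // update the file if it differs, ignore it if it is identical.
}
```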
Send a simple JSON array via HTTP POST and get an Excel file back. The default filename is Export.xlsx. By adding the optional query parameter ?filename=xyz to the request, you can specify a different filename.
NOTE: do not forget to change the webhook path!
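A minimal example call (the webhook path "excel-export" and the payload are placeholders; use whatever path you configure):

```ts
import { writeFile } from "node:fs/promises";

// POST a JSON array and save the Excel file that comes back.
const res = await fetch(
  "https://your-n8n-instance.com/webhook/excel-export?filename=report",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify([{ name: "Ada", score: 10 }, { name: "Grace", score: 9 }]),
  },
);

await writeFile("report.xlsx", Buffer.from(await res.arrayBuffer()));
```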
This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions.
To use this template, you need to be on n8n version 1.50.0 or later.
This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is not only efficient but also provides real value by summarizing valuable content.
Note that to use this template, you need to be on n8n version 1.50.0 or later.
⚙️🛠️🚀🤖🦾
This template is a PoC of a ReAct AI Agent capable of fetching random pages (not only Wikipedia or Google search results).
At the top, a manual chat node is connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content.
The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters:
**url** – the address of the page to fetch
**method** = full / simplified
**maxlimit** – the maximum length of the final page. For longer pages, an error message is returned to the agent instead
Page content fetching is a multistep process:
An HTTP Request node tries to get the page content.
If the page content was successfully retrieved, a series of post-processing steps begins:
Extract the HTML body content
Remove all unnecessary tags to reduce the page size
Further eliminate external URLs and IMG src values (based on the method query parameter)
The remaining HTML is converted to Markdown, reducing the page length even more while preserving the basic page structure
The remaining content is sent back to the Agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node).
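A rough sketch of those reduction steps in code (the tag list, regexes, and Markdown conversion are simplifications; the template itself relies on n8n's HTML and Markdown handling):

```ts
// Hedged sketch of the post-processing chain: body extraction, tag stripping,
// optional URL/IMG removal ("simplified" mode), then conversion towards Markdown.
function reducePage(html: string, method: "full" | "simplified", maxlimit = 70000): string {
  // 1. Keep only the <body> content.
  let content = html.match(/<body[^>]*>([\s\S]*)<\/body>/i)?.[1] ?? html;

  // 2. Drop tags that only add weight.
  content = content.replace(/<(script|style|nav|footer|iframe)[\s\S]*?<\/\1>/gi, "");

  // 3. In "simplified" mode, remove external links and image sources.
  if (method === "simplified") {
    content = content
      .replace(/\shref="https?:\/\/[^"]*"/gi, "")
      .replace(/\ssrc="[^"]*"/gi, "");
  }

  // 4. Convert to Markdown (a real implementation would use a library such as turndown);
  //    here we only strip the remaining tags and collapse blank lines.
  const markdown = content.replace(/<[^>]+>/g, "").replace(/\n{3,}/g, "\n\n").trim();

  if (markdown.length > maxlimit) {
    throw new Error(`Page is too long (${markdown.length} > ${maxlimit})`);
  }
  return markdown;
}
```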
NB:
You can isolate the HTTP Request part into a separate workflow.
Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object.
Please reach out to Eduard if you need further assistance with your n8n workflows and automations!
Note that to use this template, you need to be on n8n version 1.19.4 or later.
Enrich your company lists with OpenAI GPT-3 ↓
You’ll get valuable information such as:
Market (B2B or B2C)
Industry
Target Audience
Value Proposition
This will help you to:
add more personalization to your outreach
make informed decisions about which accounts to target
I've made the process easy with an n8n workflow.
Here is what it does:
Retrieve website URLs from Google Sheets
Extract the content for each website
Analyze it with GPT-3 (see the sketch below)
Update Google Sheets with GPT-3 data
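The analysis step can be sketched with the OpenAI API as below; the prompt wording and model name are assumptions (the original template was built around GPT-3):

```ts
import OpenAI from "openai";

// Hedged sketch of the "Analyze it with GPT-3" step for one website.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function enrichCompany(websiteText: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // placeholder model name
    messages: [
      {
        role: "user",
        content:
          "From the website text below, return JSON with the keys market (B2B or B2C), " +
          `industry, target_audience and value_proposition.\n\n${websiteText}`,
      },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```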
The workflow starts by listening for messages from Telegram users. The message is then processed, and based on its content, different actions are taken. If it's a regular chat message, the workflow generates a response using the OpenAI API and sends it back to the user. If it's a command to create an image, the workflow generates an image using the OpenAI API and sends the image to the user. If the command is unsupported, an error message is sent. Throughout the workflow, there are additional nodes for displaying notes and simulating typing actions.
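The routing can be summarized as below; the command names and exact branching are assumptions, since the workflow implements this with IF/Switch nodes:

```ts
// Hedged sketch of how an incoming Telegram message is routed.
type Action =
  | { kind: "chat"; prompt: string }    // answered via the OpenAI chat API
  | { kind: "image"; prompt: string }   // answered via the OpenAI image API
  | { kind: "error"; message: string }; // unsupported command

function routeMessage(text: string): Action {
  if (text.startsWith("/image")) {
    return { kind: "image", prompt: text.replace("/image", "").trim() };
  }
  if (text.startsWith("/")) {
    return { kind: "error", message: "Unsupported command" };
  }
  return { kind: "chat", prompt: text };
}
```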
The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response.
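In code, the retrieval step looks roughly like this; the index name, embedding model, and topK value are assumptions, and the template itself wires this up with LangChain nodes rather than hand-written code:

```ts
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

// Hedged sketch of the retrieval step: embed the chat message, query Pinecone,
// and pass the most similar chunks to the chat model as context.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

async function retrieveContext(question: string): Promise<string[]> {
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });

  const index = pinecone.index("bitcoin-whitepaper"); // placeholder index name
  const result = await index.query({
    vector: embedding.data[0].embedding,
    topK: 4,
    includeMetadata: true,
  });

  return (result.matches ?? []).map((m) => String(m.metadata?.text ?? ""));
}
```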
Note that to use this template, you need to be on n8n version 1.19.4 or later.