To retrieve a column by its ID:
Route a Monday.com node that gets an item to the COLUMN BY ID node.
Edit the COLUMN BY ID node and enter the column ID in the first line of code.
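As a rough illustration only, here is a minimal JavaScript sketch of what such a Code node could look like, assuming the Monday.com item arrives with a column_values array; the column ID "status8" is a placeholder, and the actual node in the template may differ:

// Hypothetical sketch: reduce the incoming Monday.com item to one column.
const columnId = 'status8'; // first line of code: enter your column ID here
const item = $input.first().json; // the item fetched by the Monday.com node
const column = (item.column_values || []).find(c => c.id === columnId);
return [{ json: column || {} }];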
To retrieve all linked pulses from a Board Relation column:
Route a Monday.com node that gets an item to the GET BOARD RELATION node
Edit the GET BOARD RELATION node to specify the column name.
All linked pulses will be retrieved by the subsequent PULL LINKEDPULSE node.
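For orientation, a sketch of how linked pulse IDs could be extracted in a Code node, assuming the column's value is the JSON string Monday.com uses for board relations (a linkedPulseIds array); the column title "Connected Board" is a placeholder:

// Hypothetical sketch: pull linked pulse IDs out of a board relation column.
const columnName = 'Connected Board'; // placeholder: your column's name
const cols = $input.first().json.column_values || [];
const relation = cols.find(c => c.title === columnName);
const parsed = relation && relation.value ? JSON.parse(relation.value) : {};
// emit one n8n item per linked pulse for the downstream "get item" step
return (parsed.linkedPulseIds || []).map(p => ({ json: { pulseId: p.linkedPulseId } }));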
To pull all subitems from an item:
Route a Monday.com node that gets an item to the PULL SUBITEMS node.
All subitems will be retrieved by the subsequent GET EACH SUBITEM node.
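The subitems column behaves much like a board relation, so a comparable sketch works under the same assumptions about the value format; the "subitems" column ID here is an assumption:

// Hypothetical sketch: fan an item's subitem IDs out into separate items.
const cols = $input.first().json.column_values || [];
const sub = cols.find(c => c.id === 'subitems'); // assumed column ID
const parsed = sub && sub.value ? JSON.parse(sub.value) : {};
return (parsed.linkedPulseIds || []).map(p => ({ json: { subitemId: p.linkedPulseId } }));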
To upload a file:
Replace the Convert to File node with whatever node you are using to output your binary file data.
Enable the MONDAY UPLOAD node.
If the destination column is named anything other than the default of "file", edit the MONDAY UPLOAD node and change column_id:"file" in the first Value field to match the name of your file column.
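For context, file uploads to Monday.com go through the add_file_to_column GraphQL mutation, sent as multipart form data with the binary file attached. A sketch of the query string, with a placeholder item ID, shows where column_id fits:

// Hypothetical sketch of the upload mutation; 123456789 is a placeholder item ID.
const columnId = 'file'; // change this if your file column has a different ID
const query = `
  mutation ($file: File!) {
    add_file_to_column (item_id: 123456789, column_id: "${columnId}", file: $file) { id }
  }`;
// sent alongside the binary file data produced by the previous node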
Task:
Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why:
You can prototype or replace a backend process with a single workflow
Main use cases:
Replace backend logic with a workflow
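Once both nodes are wired up, the workflow behaves like any REST endpoint. A hypothetical call from a Node.js (18+) script, with a placeholder webhook URL, could look like this:

// Placeholder URL: substitute your webhook's production URL from the Webhook node.
const res = await fetch('https://your-n8n-instance/webhook/my-endpoint', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada' }),
});
console.log(await res.json()); // whatever the Respond to Webhook node returns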
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through them, step by step.
Designed with beginners in mind, this tutorial provides a hands-on approach to learning n8n's basic functionalities.
You can still use an app in a workflow even if n8n doesn't have a dedicated node or operation for it. With the HTTP Request node, you can call any API endpoint and use the incoming data in your workflow.
Main use cases:
Connect with apps and services that n8n doesn’t have integration with
Web scraping
How it works
This workflow can be divided into three branches, each serving a distinct purpose:
1. Splitting into Items (HTTP Request - Get Mock Albums):
The workflow initiates with a manual trigger (On clicking 'execute').
It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums".
The obtained data is split into items using the Item Lists node, facilitating easier management.
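Outside n8n, the same fetch-and-split step could be sketched in plain JavaScript; the mapping below mirrors what the Item Lists node does with the returned array:

// Fetch the mock albums and split the array into one item per album.
const response = await fetch('https://jsonplaceholder.typicode.com/albums');
const albums = await response.json(); // a single array of album objects
const items = albums.map(album => ({ json: album })); // one n8n-style item each
console.log(items.length, 'albums split into items');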
2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract):
Another branch of the workflow involves fetching a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random".
The HTML Extract node extracts the article title from the fetched Wikipedia page.
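As a rough stand-in for this branch (the HTML Extract node works with CSS selectors; a regex on the page's title tag is used here instead), the same idea in plain JavaScript:

// Fetch a random article and pull its title out of the raw HTML.
const res = await fetch('https://en.wikipedia.org/wiki/Special:Random');
const html = await res.text();
const match = html.match(/<title>(.*?)<\/title>/);
console.log(match ? match[1] : 'no title found'); // e.g. "Some Article - Wikipedia"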
3. Handling Pagination (GitHub API request):
The final branch deals with pagination when querying the GitHub API.
It sends an HTTP request to "https://api.github.com/users/that-one-tom/starred", with parameters like the page number and items per page dynamically set by the Set node.
The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page).
This process repeats until all pages are fetched, allowing for comprehensive data retrieval.
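The same loop can be sketched in plain JavaScript to make the stopping condition concrete; the per-page size of 30 and the User-Agent header are assumptions, not values taken from the workflow:

// Keep requesting pages until GitHub returns fewer results than the page size.
const perPage = 30; // assumed page size
let page = 1;
let allStars = [];
while (true) {
  const res = await fetch(
    `https://api.github.com/users/that-one-tom/starred?per_page=${perPage}&page=${page}`,
    { headers: { 'User-Agent': 'pagination-demo' } } // GitHub requires a User-Agent
  );
  const batch = await res.json();
  allStars = allStars.concat(batch);
  if (batch.length < perPage) break; // the "Are we finished?" check
  page += 1; // the "Increment Page" step
}
console.log(`${allStars.length} starred repos fetched`);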