Nodes: Jira Software, Figma Trigger (Beta)

Automate Figma Versioning and Jira Updates with n8n Webhook Integration


Created by omid dev (omiddev)

Template description

How It Works:
This n8n template automates the process of tracking design changes in Figma and updating relevant Jira issues. The template is triggered when a new version is created in Figma via a custom plugin. Once the version is committed, the plugin sends the design details to an n8n workflow using a webhook.
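For illustration, the plugin's webhook call might carry a JSON body of roughly this shape. The field names below are assumptions for the sketch, not the plugin's documented schema; check the plugin's source for the actual payload.

```typescript
// Hypothetical shape of the payload the Figma plugin POSTs to the n8n webhook.
// Field names are illustrative; verify against the plugin's actual schema.
interface FigmaVersionPayload {
  versionName: string;   // version label entered in the plugin
  designLink: string;    // link to the Figma file or frame
  jiraIssueLink: string; // e.g. "https://your-domain.atlassian.net/browse/PROJ-123"
  taskStatus: string;    // e.g. "In Progress", "Done"
}

const example: FigmaVersionPayload = {
  versionName: "v1.4 – checkout redesign",
  designLink: "https://www.figma.com/file/abc123/Checkout",
  jiraIssueLink: "https://your-domain.atlassian.net/browse/PROJ-123",
  taskStatus: "In Progress",
};
```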

The workflow then performs the following actions:
- Fetches the Jira issue referenced by the issue link sent from Figma (a sketch of the key extraction is shown below).
- Adds the design changes as a comment on the Jira issue.
- Updates the status of the Jira issue based on the provided task status (e.g., "In Progress", "Done").

This streamlines the handoff, reducing the need for manual updates and keeping the latest design changes and task statuses in sync between the design team and developers.
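A minimal sketch of that issue-key extraction (for example, in a Code node), assuming the standard Atlassian browse-URL format:

```typescript
// Extract a Jira issue key such as "PROJ-123" from a browse link.
// Assumes the standard https://<site>.atlassian.net/browse/<KEY> format.
function extractIssueKey(jiraIssueLink: string): string | null {
  const match = jiraIssueLink.match(/\/browse\/([A-Z][A-Z0-9]+-\d+)/);
  return match ? match[1] : null;
}

extractIssueKey("https://your-domain.atlassian.net/browse/PROJ-123"); // "PROJ-123"
```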

How to Use It:

Set up the Figma Plugin:
- Install the Figma Commit Plugin from GitHub.
- In the plugin, fill in the version name, design link, Jira issue link, and task status.
- Commit the changes in Figma, which will trigger the webhook.

Set Up the n8n Workflow:
- Import this template into your n8n instance.
- Connect the Figma Trigger node to capture version updates from Figma.
- Configure the Jira nodes to retrieve the issue and update the status/comment based on the data sent from the plugin.

Automate:
Once a version is committed in Figma, the workflow automatically updates the Jira issue, keeping your Figma designs and Jira tasks in sync. By integrating Figma, Jira, and n8n through this template, you eliminate manual steps and make collaboration between design and development teams more efficient.


More Engineering workflow templates

Nodes: Webhook, Respond to Webhook

Creating an API endpoint

Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes.
Why: You can prototype or replace a backend process with a single workflow.
Main use cases: Replace backend logic with a workflow.
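As a sketch, a client could call such an endpoint like this. The base URL and path below are placeholders: the path is whatever you configure on the Webhook node, and the Respond to Webhook node defines the reply.

```typescript
// Call an n8n Webhook endpoint; the Respond to Webhook node defines the reply.
// URL and path are placeholder assumptions for your own instance and node config.
const res = await fetch("https://your-n8n.example.com/webhook/my-endpoint", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "hello" }),
});
console.log(await res.json());
```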
Jonathan (jon-n8n)
Nodes: Merge

Joining different datasets

Task: Merge two datasets into one based on matching rules.
Why: A powerful capability of n8n is to easily branch out the workflow in order to process different datasets. Even more powerful is the ability to join them back together with SQL-like joining logic.
Main use cases:
- Appending data sets
- Keep only new items
- Keep only existing items
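Conceptually, two of those join modes behave like this sketch (a plain TypeScript illustration of the idea, not the Merge node's implementation; joining on an `id` field is an assumption):

```typescript
type Row = Record<string, unknown> & { id: string };

// "Keep only new items": rows in `incoming` whose key is absent from `existing`.
function keepOnlyNew(existing: Row[], incoming: Row[]): Row[] {
  const seen = new Set(existing.map((r) => r.id));
  return incoming.filter((r) => !seen.has(r.id));
}

// SQL-like inner join: combine rows from both sides that share a key.
function innerJoin(left: Row[], right: Row[]): Row[] {
  const byId = new Map(right.map((r) => [r.id, r]));
  return left.flatMap((l) => (byId.has(l.id) ? [{ ...byId.get(l.id)!, ...l }] : []));
}
```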
Jonathan (jon-n8n)
Nodes: GitHub, HTTP Request, Merge, and 11 more

Back Up Your n8n Workflows To Github

This workflow will back up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node. It then loops over the data and checks GitHub to see if a file exists that uses the workflow name. Once checked, it will update the file on GitHub if it exists, create a new file if it doesn't, and ignore the file if nothing has changed.

Config options:
- repo_owner - GitHub owner
- repo_name - GitHub repository name
- repo_path - path within the GitHub repository

> This workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n.
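The exists/update/create decision maps onto the GitHub contents API roughly as follows. This is a sketch, not the workflow's exact nodes; `owner`, `repo`, and `path` correspond to the config options above, and the skip-if-unchanged check is left out for brevity.

```typescript
// Sketch: upsert a workflow JSON file via the GitHub contents API.
// GET returns the file's sha if it exists; PUT with that sha updates it,
// PUT without a sha creates it.
async function upsertFile(owner: string, repo: string, path: string, json: string, token: string) {
  const url = `https://api.github.com/repos/${owner}/${repo}/contents/${path}`;
  const headers = { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" };

  const existing = await fetch(url, { headers });
  const sha = existing.ok ? (await existing.json()).sha : undefined;

  await fetch(url, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      message: sha ? `Update ${path}` : `Create ${path}`,
      content: Buffer.from(json).toString("base64"),
      ...(sha ? { sha } : {}),
    }),
  });
}
```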
Jonathan (jon-n8n)
Nodes: HTTP Request, and 8 more

Scrape and store data from multiple website pages

This workflow extracts data from a multi-page website. The workflow:
1. Starts from a country list at https://www.theswiftcodes.com/browse-by-country/.
2. Loads every country page (e.g., https://www.theswiftcodes.com/albania/).
3. Paginates through every page within the country page.
4. Extracts data from the country page.
5. Saves the data to MongoDB.
6. Paginates through all pages in all countries.

It uses the getWorkflowStaticData('global') method to recover the next page (saved from the previous page) and then proceeds through all the pages. There is a first section where the country list is retrieved and extracted. Later, the workflow checks whether a local cache of the page is available and, if so, recovers the cached page from disk. Finally, it saves the data to MongoDB and paginates through all the pages in the country, for all the countries.

A cache system saves each visited page to n8n's local disk. If the workflow is relaunched, it checks whether a cache file exists in order to discard unnecessary requests to the website. If the data on the website changes, you can add a Cron node to check the website once per week.

Before inserting data into MongoDB, the best way to avoid duplicates is to check that swift_code (the primary value of the collection) doesn't already exist. Using a proxy for all requests is recommended to avoid IP blocks; a good solution for proxying plus IP rotation is scrapoxy.io.

This workflow is well suited for small data requirements. If you need to scrape dynamic data, you can use a headless browser or another service. If you want to scrape huge lists of URIs, Scrapy + Scrapoxy is recommended.
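The pagination cursor described above relies on n8n's workflow static data. In a Code node that looks roughly like this sketch (the `nextPage` key name is an assumption; older Function nodes call the helper without the `$` prefix):

```typescript
// Inside an n8n Code node: persist the next page URL between executions.
const staticData = $getWorkflowStaticData('global');

// Resume from the stored cursor, or start at the first page.
const nextPage = (staticData.nextPage as string) ?? 'https://www.theswiftcodes.com/albania/';

// ...after scraping the current page, store the cursor for the next run:
staticData.nextPage = 'https://www.theswiftcodes.com/albania/page/2/';

return [{ json: { nextPage } }];
```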
Miquel Colomer (mcolomer)
Nodes: HTTP Request, Merge, and 13 more

AI Agent To Chat With Files In Supabase Storage

Video Guide: I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n (YouTube link).

Who is this for? This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

What problem does this workflow solve? Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

What this workflow does: The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
- Fetching and comparing files to avoid duplicate processing.
- Handling file downloads and extracting content based on the file type.
- Converting documents into vectorized data for contextual information retrieval.
- Storing and querying vectorized data from a Supabase vector store.

Key capabilities:
- File extraction and processing: automates handling of multiple file formats (e.g., PDFs, text files) and extracts document content.
- Vectorized embeddings creation: generates embeddings for processed data to enable AI-driven interactions.
- Dynamic data querying: allows users to query their document repository conversationally using a chatbot.

Setup (n8n workflow):
1. Fetch file list from Supabase: use Supabase to retrieve the stored file list from a specified bucket. Add logic to manage the empty folder placeholders returned by Supabase, avoiding incorrect processing.
2. Compare and filter files: aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table. Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
3. Handle file downloads: download new files using detailed storage configurations for public/private access. Adjust the storage settings and GET requests to match your Supabase setup.
4. File type processing: use a Switch node to target specific file types (e.g., PDFs or text files) and employ the relevant tools to process the content. For PDFs, extract the embedded content; for text files, process the text data directly.
5. Content chunking: break large text data into smaller chunks using the Text Splitter node. Define chunk size (default: 500 tokens) and overlap to retain necessary context across chunks.
6. Vector embedding creation: generate vectorized embeddings for the processed content using OpenAI's embedding tools. Ensure metadata, such as the file ID, is included for easy data retrieval.
7. Store vectorized data: save the vectorized information into a dedicated Supabase vector store. Use the default schema and table provided by Supabase for seamless setup.
8. AI chatbot integration: add a chatbot node to handle user input and retrieve relevant document chunks. Use metadata like the file ID for targeted queries, especially when multiple documents are involved.

Testing:
- Upload sample files to your Supabase bucket.
- Verify that files are processed and stored successfully in the vector store.
- Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
- Test for accuracy and contextual relevance of the retrieved results.
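The chunking step in the setup (500-token chunks with overlap) can be sketched as follows. For simplicity this version splits by characters rather than true tokens, so treat the numbers as an approximation of what the Text Splitter node does:

```typescript
// Split text into overlapping chunks so context is preserved across boundaries.
// chunkSize/overlap are in characters here; the template's Text Splitter node
// counts tokens (default 500), so these defaults are only an approximation.
function splitText(text: string, chunkSize = 2000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}
```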
Mark Shcherbakov (lowcodingdev)
Nodes: Google Sheets, HTTP Request, Item Lists, and 5 more

Google Maps Scraper

This workflow lets you scrape Google Maps data in an efficient way using SerpAPI. You'll get all the data from Google Maps at a cheaper cost than the official Google Maps API. Provide your Google Maps search URL as input and you'll get a list of places with many data points, such as:
- phone number
- website
- rating
- reviews
- address
and much more.

A full guide to implementing the workflow is here: https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
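A sketch of the underlying SerpAPI call (the `google_maps` engine exists; exact parameters and response fields may vary, so verify against SerpAPI's docs):

```typescript
// Query SerpAPI's Google Maps engine for places matching a search term.
const params = new URLSearchParams({
  engine: "google_maps",
  q: "coffee shops in Barcelona",
  api_key: process.env.SERPAPI_KEY ?? "",
});

const res = await fetch(`https://serpapi.com/search.json?${params}`);
const data = await res.json();
// Each entry typically includes phone, website, rating, reviews, address, ...
console.log(data.local_results?.[0]);
```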
Lucas Perret (lucasperret)

More Design workflow templates

Nodes: HTTP Request, S3, Respond to Webhook, and 2 more

Flux AI Image Generator

Easily generate images with Black Forest Labs' Flux text-to-image AI models using Hugging Face's Inference API. This template serves a web form where you can enter prompts and select predefined visual styles that are customizable with no code. The workflow integrates seamlessly with Hugging Face's free tier, and it's easy to modify for any text-to-image model that supports API access.

Try it: Curious what this template does? Try a public version here: https://devrel.app.n8n.cloud/form/flux

Set up: Watch the quick set-up video.

Accounts required:
- Huggingface.co account (free)
- Cloudflare.com account (free; used for storage, but can easily be swapped, e.g., for Google Drive)

Key features:
- Text-to-image creation: generates unique visuals based on your prompt and style.
- Hugging Face integration: uses Hugging Face's Inference API for reliable image generation.
- Customizable visual styles: select from preset styles or easily add your own.
- Adaptable: swap in any Hugging Face text-to-image model that supports API calls.

Ideal for:
- Creators: rapidly create visuals for projects.
- Marketers: prototype campaign visuals.
- Developers: test different AI image models effortlessly.

How it works: You submit an image prompt via the web form and select a visual style, which appends style instructions to your prompt. The Hugging Face Inference API then generates and returns the image, which gets hosted on Cloudflare S3. The workflow can easily be adjusted to use other models and styles for complete flexibility.
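The core Hugging Face call looks roughly like this sketch. The model ID is an example assumption; the Inference API returns raw image bytes:

```typescript
// Generate an image with a FLUX model via the Hugging Face Inference API.
// The model ID is an example; swap in any text-to-image model you have access to.
const res = await fetch(
  "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-schnell",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: "a lighthouse at dusk, watercolor style" }),
  },
);
const imageBytes = await res.arrayBuffer(); // raw image data, ready to upload to storage
```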
Max Tkacz (max-n8n)
Nodes: HTTP Request, Merge, and 5 more

Automatic Background Removal for Images in Google Drive

This n8n workflow simplifies the process of removing backgrounds from images stored in Google Drive. By leveraging the PhotoRoom API, this template enables automatic background removal, padding adjustments, and output formatting, all while storing the updated images back in a designated Google Drive folder. It is very useful for companies or individuals that spend a lot of time removing the background from product images.

How it works:
- The workflow begins with a Google Drive Trigger node that monitors a specific folder for new image uploads.
- Upon detecting a new image, the workflow downloads the file and extracts essential metadata, such as the file size.
- Configurations are set for background color, padding, output size, and more, all customizable to match specific requirements.
- The PhotoRoom API is called to process the image by removing its background and adding padding based on the settings.
- The processed image is saved back to Google Drive in the specified output folder, with an updated name indicating the background has been removed.

Requirements:
- PhotoRoom API key
- Google Drive API access

Customizing the workflow:
- Easily adjust the background color, padding, and output size using the configuration node.
- Modify the output folder path in Google Drive, or replace Google Drive with another storage service if needed.
- For advanced use cases, integrate further image processing steps, such as adding captions or analyzing content using AI.
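The PhotoRoom call at the heart of the workflow looks roughly like this. The endpoint and field name follow PhotoRoom's publicly documented segmentation API, but treat them as assumptions and verify against the current docs:

```typescript
// Remove an image's background via the PhotoRoom API (multipart upload).
// Endpoint and field names are based on PhotoRoom's documented v1 API; verify before use.
async function removeBackground(image: Blob, apiKey: string): Promise<ArrayBuffer> {
  const form = new FormData();
  form.append("image_file", image, "input.png");

  const res = await fetch("https://sdk.photoroom.com/v1/segment", {
    method: "POST",
    headers: { "x-api-key": apiKey },
    body: form,
  });
  return res.arrayBuffer(); // processed PNG with the background removed
}
```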
Simon (simonfes)
Nodes: HTTP Request, Merge, and 5 more

🎨 Interactive Image Editor with FLUX.1 Fill Tool for Inpainting

> Like this template? Connect with Eduard via LinkedIn.

This workflow is a prototype of an AI-powered image editing interface, similar to Photoshop's Generative Fill feature, but running entirely in the browser. It provides a web-based editor that allows users to:
- Select areas in images using an adjustable brush tool
- Input text prompts to guide the AI generation
- Compare original and generated images side by side
- Iterate on edits with different prompts and settings
- Save or reuse generated images

> 🎨 Perfect for product catalog management, seasonal content updates, and creative image editing tasks!

📋 Requirements
FLUX API access: you'll need API credentials from FLUX to use this workflow. Configure the HTTP Header Auth credential in n8n with your FLUX API key.

🔧 Key Components
- FLUX Fill API for AI-powered image generation
- Konva.js for canvas manipulation
- img-comparison-slider for result visualization
- Custom CSS/JS for editor functionality

Simple editor interface:
- An HTML page with the editor is served on the webhook call
- Adjustable brush selection tool
- Several mock examples, plus the option to upload custom images
- Basic prompt and FLUX model parameter controls

Image processing pipeline:
- Handles the image and mask separately
- Processes FLUX Fill API requests
- Delivers results back to the editor

Result viewer:
- Split-screen comparison of original and generated images
- Interactive slider for before/after comparison
- Options to save or continue editing
- Support for multiple iteration cycles

🎯 Use Cases
This prototype is particularly useful for:
- Testing AI-powered image editing concepts
- Quick product visualization experiments
- Exploring creative image variations
- Demonstrating inpainting capabilities

> 💡 Pro Tip: Save masks for frequently edited areas to quickly generate variations with different prompts!

The workflow can be extended to integrate with various data sources and can be customized for specific business needs.
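Conceptually, each inpainting request sends the original image, the brush mask, and the prompt to the FLUX Fill API. A sketch under the assumption of base64-encoded inputs; the endpoint and body fields are modeled on Black Forest Labs' Fill API and may differ, so consult the official FLUX API docs for the real contract:

```typescript
// Sketch of an inpainting request: original image + mask + prompt.
// Endpoint and body fields are assumptions modeled on Black Forest Labs' Fill API;
// verify against the official FLUX API documentation.
async function inpaint(imageB64: string, maskB64: string, prompt: string, apiKey: string) {
  const res = await fetch("https://api.bfl.ml/v1/flux-pro-1.0-fill", {
    method: "POST",
    headers: { "x-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ image: imageB64, mask: maskB64, prompt }),
  });
  return res.json(); // typically a task id to poll for the generated image
}
```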
Eduard (eduard)
Nodes: HTTP Request, Merge, Redis, and 24 more

Realtime Notion Todoist 2-way Sync with Redis

Purpose
This solution enables you to manage all your Notion and Todoist tasks from different workspaces, as well as your calendar events, in a single place. All tasks can be managed in Todoist, and Fantastical can additionally be used to manage scheduled tasks and events all together.

Demo & Explanation

How it works
- The realtime sync consists of two workflows, both triggered by a registered webhook from either Notion or Todoist.
- To avoid overwrites by late-arriving webhook calls, the current task is retrieved from both sides every time.
- Redis is used to prevent endless loops, since an update in one system triggers another webhook call again. Using the ID of the task, the trigger is locked down for 15 seconds (see the sketch below).
- Depending on the detected changes, the other side is updated accordingly.
- Generally, Notion is treated as the main source. Using an "Obsolete" status guarantees that tasks never get deleted entirely by accident.
- The Todoist ID is stored in the Notion task, so the two stay linked together.
- An additional full sync workflow runs daily and fixes inconsistencies, if any occurred, since webhooks cannot be trusted entirely.
- Since Todoist requires a more complex setup, a tiny workflow helps with activating the webhook. Another tiny workflow helps generate a global config, which is used by all workflows for mapping purposes.

Mapping (Notion >> Todoist)
- Name: Task Name
- Priority: Priority (1: do first, 2: urgent, 3: important, 4: unset)
- Due: Date
- Status: Section (Done: completed, Obsolete: deleted)
- <page_link>: Description (read-only)
- Todoist ID: <task_id>

Current limitations
- Changes to the same task cannot be made simultaneously in both systems within a 15-20 second time frame.
- Subtasks are not yet linked automatically to their parent.
- Recurring tasks are not supported yet.
- Task names do not support URLs yet.

Prerequisites
Notion: A database must already exist (get a basic template here) with the following properties (case matters!):
- Text: "Name"
- Status: "Status", containing at least the options "Backlog", "In progress", "Done", "Obsolete"
- Select: "Priority", containing the options "do first", "urgent", "important"
- Date: "Due"
- Checkbox: "Focus"
- Text: "Todoist ID"

Todoist: A project must already exist with the same sections as defined in the Status property in Notion (except Done and Obsolete).

Redis: Create a free Redis Cloud instance or self-host.

Setup
The setup involves quite a lot of steps, yet many of them can be automated for business-internal purposes. Just follow the video or do the following:
1. Set up credentials for Notion (access token), Todoist (access token), and Redis. You can also create empty credentials and populate them later during further setup.
2. Clone this workflow by clicking the "Use workflow" button and then choosing your n8n instance; otherwise you need to map the credentials of many nodes.
3. Follow the instructions described within the bundle of sticky notes on the top left of the workflow.

How to use
You can apply changes (create, update, delete) to tasks both in Notion and Todoist, which then get synced over within a couple of seconds (handled by the differential realtime sync). The daily running full sync resolves possible discrepancies in Todoist and sends a summary via email if anything needed to be updated. In case that contains an unintended change, you can jump from the email directly to the task to fix it manually.
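The 15-second Redis lock that breaks the update loop can be sketched like this (using ioredis; the key naming is an assumption for illustration):

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Try to acquire a 15-second lock for a task; returns false if the task
// was just updated from the other side, so this trigger should be ignored.
async function acquireSyncLock(taskId: string): Promise<boolean> {
  // SET key value EX 15 NX: only set if the key does not exist yet.
  const result = await redis.set(`sync-lock:${taskId}`, "1", "EX", 15, "NX");
  return result === "OK";
}
```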
Mario (octionic)
Nodes: HTTP Request, Wordpress

Generate & Upload an Audio Summary of a WordPress (or Woocommerce) Article

This workflow automates the process of summarizing or transcribing a WordPress article, converting the text into speech using the ElevenLabs API, and uploading the resulting MP3 file back to WordPress.

How it works:
1. Trigger: the workflow starts manually when the user clicks "Test Workflow".
2. Retrieve article: it fetches a WordPress article based on a given post ID.
3. Summarize or transcribe: an LLM (GPT-4o-mini) generates either a summary of the article or a full transcription, depending on the chosen prompt.
4. Generate speech: the processed text (summary or transcription) is converted into an MP3 audio file using the ElevenLabs API.
5. Upload MP3 to WordPress: the generated MP3 file is uploaded to WordPress.
6. Update WordPress post: the article is updated with an embedded audio player, allowing users to listen to the summary or transcription.

Set-up steps:
- WordPress API credentials: configure your WordPress API credentials in n8n.
- ElevenLabs API key: obtain an API key from ElevenLabs and configure it in n8n.
- Choose between summary or transcription: modify the AI prompt to either generate a summary or keep the full transcription.
- Test the workflow: run the workflow and ensure the MP3 file is correctly generated and uploaded.

💡 Customization options:
- Modify the AI prompt to switch between a summary and a transcription.
- Change the voice model in ElevenLabs for different speech styles.
- Adjust the output format for a higher or lower quality MP3.

🚀 This automation improves content accessibility and engagement by allowing users to listen to a summarized or full version of the article.
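The text-to-speech step boils down to a call like this sketch. The endpoint is ElevenLabs' documented text-to-speech API; the voice ID is an elided placeholder, and the model ID is an example:

```typescript
// Convert text to an MP3 via the ElevenLabs text-to-speech API.
// <voice_id> is a placeholder; pick one from your ElevenLabs voice library.
async function textToSpeech(text: string, apiKey: string): Promise<ArrayBuffer> {
  const res = await fetch("https://api.elevenlabs.io/v1/text-to-speech/<voice_id>", {
    method: "POST",
    headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
  });
  return res.arrayBuffer(); // audio/mpeg bytes, ready to upload to WordPress
}
```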
phil
