Slack node
OpenAI Chat Model node
+5

Time logging on Clockify using Slack

Published 2 months ago

Created by

blockia
Blockia Labs

Template description


How it works

This workflow simplifies time tracking for teams and agencies by integrating Slack with Clockify. It enables users to log, update, or delete time entries directly within Slack, leveraging an AI-powered assistant for seamless and conversational interactions. Key features include:

  • Effortless Time Logging: Create and manage time entries in Clockify without leaving Slack.
  • AI-Powered Assistant: Get step-by-step guidance to ensure accurate and efficient time logging.
  • Project and Client Management: Retrieve project and client information from Clockify effortlessly.
  • Overlap Prevention: Avoid overlapping entries with built-in time validation (see the sketch below).
  • Automated Descriptions: Generate clear, grammatically correct descriptions for time logs.
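For illustration, the overlap check amounts to a standard interval-intersection test. A minimal TypeScript sketch with hypothetical entry shapes (the template's actual validation lives inside the workflow):

```typescript
// Two entries overlap when each starts before the other ends.
interface TimeEntry {
  start: Date;
  end: Date;
}

function overlaps(a: TimeEntry, b: TimeEntry): boolean {
  return a.start < b.end && b.start < a.end;
}

// Reject a candidate entry if it collides with any existing entry.
function hasConflict(candidate: TimeEntry, existing: TimeEntry[]): boolean {
  return existing.some((entry) => overlaps(candidate, entry));
}
```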

Set up steps

1. Prepare your integrations

  • Ensure you have active accounts for both Slack and Clockify.
  • Generate your Clockify API credentials for integration.

2. Import the workflow

  • Download and import the workflow template into your n8n instance.
  • Configure the workflow to connect with your Slack and Clockify accounts.

3. Configure the workflow

  • Add your Clockify API credentials in the workflow settings.
  • Set up the Slack Trigger to listen for app mentions or specific commands.

4. Test the workflow

  • Use Slack to create a time entry and verify it in Clockify (a hedged API sketch follows these steps).
  • Test updating and deleting existing entries to ensure smooth functionality.
  • Check for any overlapping time logs or incorrect data entries.
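To verify entries outside Slack, you can also hit Clockify's public REST API directly. A hedged sketch of creating a time entry (v1 endpoint and X-Api-Key header per Clockify's public docs; the IDs and timestamps are placeholders):

```typescript
// Hedged sketch: create a Clockify time entry over the public v1 REST API.
// The API key, workspace ID, and project ID are placeholders you supply.
const CLOCKIFY_API = "https://api.clockify.me/api/v1";

async function createTimeEntry(apiKey: string, workspaceId: string) {
  const res = await fetch(`${CLOCKIFY_API}/workspaces/${workspaceId}/time-entries`, {
    method: "POST",
    headers: {
      "X-Api-Key": apiKey, // Clockify authenticates with this header
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      start: "2024-01-15T09:00:00Z", // ISO-8601 timestamps
      end: "2024-01-15T10:30:00Z",
      description: "Sprint planning with the client",
      projectId: "PROJECT_ID",
    }),
  });
  if (!res.ok) throw new Error(`Clockify request failed: ${res.status}`);
  return res.json();
}
```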

Why use this workflow?

  • Efficiency: Eliminate the need to switch between tools for time tracking.
  • Accuracy: AI-driven validation ensures error-free entries.
  • Automation: Simplify repetitive tasks like updating or deleting time logs.
  • Proactive Guidance: Conversational assistant ensures smooth operations.


More Engineering workflow templates

Webhook node
Respond to Webhook node

Creating an API endpoint

Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes. Why: You can prototype or replace a backend process with a single workflow. Main use case: replace backend logic with a workflow.
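For context, calling the finished endpoint from any client is plain HTTP. A sketch with a placeholder URL (n8n generates the real webhook path; the Respond to Webhook node controls the response body):

```typescript
// Hypothetical call to an n8n webhook endpoint; the URL is whatever
// your Webhook node is configured with, not a fixed n8n address.
async function callEndpoint() {
  const res = await fetch("https://your-n8n-instance/webhook/my-endpoint", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Ada" }),
  });
  console.log(await res.json()); // body defined by the Respond to Webhook node
}
```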
jon-n8n
Jonathan
Merge node

Joining different datasets

Task: Merge two datasets into one based on matching rules. Why: A powerful capability of n8n is to easily branch out the workflow in order to process different datasets. Even more powerful is the ability to join them back together with SQL-like joining logic. Main use cases: appending data sets, keeping only new items, or keeping only existing items.
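The join modes map onto familiar set operations. A rough TypeScript analogue of two of the listed use cases, with a hypothetical id matching key:

```typescript
interface Row { id: string; [key: string]: unknown; }

// "Keep only new items": an anti-join on the matching key.
function keepOnlyNew(incoming: Row[], existing: Row[]): Row[] {
  const known = new Set(existing.map((r) => r.id));
  return incoming.filter((r) => !known.has(r.id));
}

// "Keep only existing items": a semi-join on the matching key.
function keepOnlyExisting(incoming: Row[], existing: Row[]): Row[] {
  const known = new Set(existing.map((r) => r.id));
  return incoming.filter((r) => known.has(r.id));
}
```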
jon-n8n
Jonathan
GitHub node
HTTP Request node
Merge node
+11

Back Up Your n8n Workflows To Github

This workflow will back up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node. It then loops over the data and checks GitHub to see if a file named after the workflow already exists. It then updates the file on GitHub if it exists and has changed, creates a new file if it doesn't exist, and ignores the file if it is unchanged (see the sketch below). Config options: repo_owner - GitHub owner; repo_name - GitHub repository name; repo_path - path within the GitHub repository. Note: this workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n.
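The exists/update/create/ignore decision is what GitHub's contents API supports directly. A hedged sketch (the repo coordinates mirror the template's config options; error handling omitted):

```typescript
// Sketch of the create/update/ignore decision via GitHub's contents API.
// repoOwner/repoName/repoPath correspond to the template's config options.
async function backupWorkflow(
  token: string, repoOwner: string, repoName: string,
  repoPath: string, name: string, json: string,
) {
  const url = `https://api.github.com/repos/${repoOwner}/${repoName}/contents/${repoPath}/${name}.json`;
  const headers = { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" };

  const existing = await fetch(url, { headers });       // 404 -> file doesn't exist yet
  const content = Buffer.from(json).toString("base64"); // contents API wants base64

  if (existing.ok) {
    const file = await existing.json();
    if (Buffer.from(file.content, "base64").toString() === json) return; // unchanged: ignore
    await fetch(url, {                                  // changed: update (sha required)
      method: "PUT", headers,
      body: JSON.stringify({ message: `Update ${name}`, content, sha: file.sha }),
    });
  } else {
    await fetch(url, {                                  // missing: create
      method: "PUT", headers,
      body: JSON.stringify({ message: `Add ${name}`, content }),
    });
  }
}
```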
jon-n8n
Jonathan
HTTP Request node
+8

Scrape and store data from multiple website pages

This workflow extracts data from a multi-page website. The workflow: 1) Starts from a country list at https://www.theswiftcodes.com/browse-by-country/. 2) Loads every country page (e.g., https://www.theswiftcodes.com/albania/). 3) Paginates through every page within each country page. 4) Extracts data from each country page. 5) Saves the data to MongoDB. 6) Repeats across all pages of all countries. It uses the getWorkflowStaticData('global') method to recover the next page (saved from the previous page) and proceed through all the pages (see the sketch below). First, the country list is retrieved and extracted. Then the workflow checks whether a locally cached copy of each page is available and, if so, recovers the cached page from disk. Finally, data is saved to MongoDB while paginating through all the pages in every country. A cache system saves each visited page to n8n's local disk, so relaunching the workflow skips requests for pages that already have a cache file. If the data on the website changes, you can add a Cron node to check the site once per week. Before inserting data into MongoDB, the best way to avoid duplicates is to check that swift_code (the primary value of the collection) doesn't already exist. Using a proxy for all requests is recommended to avoid IP blocks; a good solution for proxying plus IP rotation is scrapoxy.io. This workflow is perfect for small data requirements. For dynamic data you can use a headless browser or another service, and for scraping huge lists of URIs, Scrapy + Scrapoxy is recommended.
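The pagination state the author describes is held in workflow static data. A sketch of what that looks like inside an n8n Function/Code node (the nextPage property name is illustrative):

```typescript
// getWorkflowStaticData is n8n's built-in helper for state that survives
// between executions; declared here so the sketch is self-contained.
declare function getWorkflowStaticData(type: string): Record<string, unknown>;

const staticData = getWorkflowStaticData('global');

// Resume from the page saved by the previous run, or start at the beginning.
const current = (staticData.nextPage as string) ?? 'https://www.theswiftcodes.com/albania/';

// ... scrape `current`, then record whichever page should come next ...
staticData.nextPage = 'https://www.theswiftcodes.com/albania/page/2/'; // placeholder

// Inside the node, return items as n8n requires:
return [{ json: { scraped: current } }];
```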
mcolomer
Miquel Colomer
Google Sheets node
HTTP Request node
Item Lists node
+5

Google Maps Scraper

This workflow scrapes Google Maps data efficiently using SerpAPI, giving you Google Maps data at a lower cost than the Google Maps API. Provide your Google Maps search URL as input and you'll get a list of places with many data points, such as: phone number, website, rating, reviews, address, and much more. A full guide to implementing the workflow is here: https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
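Under the hood this is a SerpAPI search with the google_maps engine. A hedged sketch (parameter names follow SerpAPI's documented conventions; the query is an example):

```typescript
// Hedged sketch of a SerpAPI google_maps search; the query is an example.
async function searchPlaces(apiKey: string) {
  const params = new URLSearchParams({
    engine: "google_maps",
    q: "coffee shops in Berlin",
    api_key: apiKey,
  });
  const res = await fetch(`https://serpapi.com/search.json?${params}`);
  const data = await res.json();
  // local_results holds the place listings (name, phone, website, rating, ...)
  return data.local_results;
}
```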
lucasperret
Lucas Perret
HTTP Request node
Merge node
+13

AI Agent To Chat With Files In Supabase Storage

Video Guide: I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n (YouTube link).

Who is this for? This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

What problem does this workflow solve? Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

What this workflow does: The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include fetching and comparing files to avoid duplicate processing, handling file downloads and extracting content based on the file type, converting documents into vectorized data for contextual information retrieval, and storing and querying vectorized data from a Supabase vector store.

Setup:
  • Fetch the file list from Supabase: retrieve the stored file list from a specified bucket, adding logic to manage the empty-folder placeholders Supabase returns so they aren't processed.
  • Compare and filter files: aggregate the files retrieved from storage, compare them to the existing list in the Supabase files table, and exclude duplicates and placeholder files so only unprocessed files are handled.
  • Handle file downloads: download new files using the storage configuration for public/private access, adjusting the storage settings and GET requests to match your Supabase setup.
  • File type processing: use a Switch node to target specific file types, extracting embedded content from PDFs and processing text files directly.
  • Content chunking: break large text into smaller chunks with the Text Splitter node, defining chunk size (default: 500 tokens) and overlap to retain context across chunks (see the sketch below).
  • Vector embedding creation: generate vectorized embeddings for the processed content using OpenAI's embedding tools, including metadata such as the file ID for easy retrieval.
  • Store vectorized data: save the vectorized information into a dedicated Supabase vector store, using the default schema and table provided by Supabase.
  • AI chatbot integration: add a chatbot node to handle user input and retrieve relevant document chunks, using metadata like file ID for targeted queries when multiple documents are involved.

Testing: Upload sample files to your Supabase bucket, verify that they are processed and stored in the vector store, ask simple conversational questions about your documents (e.g., "What does Chapter 1 say about the Roman Empire?"), and check the accuracy and contextual relevance of the retrieved results.
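The content-chunking step behaves roughly like this character-based approximation (a sketch only; the real workflow uses the Text Splitter node with token-based sizing):

```typescript
// Fixed-size chunks with overlap so context survives across boundaries.
// Character counts stand in for the node's token counts.
function chunkText(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  // size must exceed overlap, otherwise the loop would not advance
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```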
lowcodingdev
Mark Shcherbakov

More HR workflow templates

Notion node
Code node
+6

Notion AI Assistant Generator

This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure. Set up: watch the quick set-up video 👇 Key features: Instant Assistant Generation: enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema. Advanced Querying: the assistant performs flexible queries, filtering records by multiple fields (e.g., tags, names), and can also search inside Notion pages to pull relevant content from specific blocks. Schema Awareness: it understands and interacts with various Notion column types like text, dates, and tags for accurate responses. Reference Links: each query returns direct links to the exact Notion pages that inform the assistant's response, promoting transparency and easy access. Self-Validation: the workflow has logic to check the generated assistant and, if any errors are detected, reruns the agent to fix them. Ideal for: product managers (easily access and query product data across Notion databases), support teams (quickly search through knowledge bases for precise information to enhance support accuracy), operations teams (streamline access to HR, finance, or logistics data for fast, efficient retrieval), and data teams (automate large dataset queries across multiple properties and records). How it works: this AI assistant leverages two HTTP request tools, one for querying the Notion database and another for retrieving data within individual pages (see the sketch below). It's powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
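The database-query tool amounts to a call like the following against Notion's public API. A hedged sketch (the database ID and filter are placeholders; the Notion-Version header follows Notion's docs):

```typescript
// Hedged sketch of the database-query HTTP tool; databaseId and the
// filter contents are placeholders.
async function queryDatabase(token: string, databaseId: string) {
  const res = await fetch(`https://api.notion.com/v1/databases/${databaseId}/query`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Notion-Version": "2022-06-28", // required versioning header
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      filter: { property: "Tags", multi_select: { contains: "HR" } },
    }),
  });
  return (await res.json()).results; // each result links back to its Notion page
}
```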
max-n8n
Max Tkacz
HTTP Request node
Google Drive node
Google Calendar node
+9

Actioning Your Meeting Next Steps using Transcripts and AI

This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent. Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites. How it works: This workflow scans the calendar for client or team meetings which were held online. Attempts are made to fetch any recorded transcripts, which are then sent to the AI agent. The AI agent summarises them and identifies whether any follow-on meetings are required. If so, the agent uses its Calendar Tool to create the event at the time, date, and place for the next meeting, as well as add known attendees (see the sketch below). Requirements: Google Calendar and the ability to fetch meeting transcripts (there is a special OAuth permission for this action!), plus an OpenAI account for access to the LLM. Customising the workflow: this example only books follow-on meetings but could be extended to generate reports or send emails.
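Creating the follow-on event boils down to a call like this against the Google Calendar REST API. A hedged sketch (times and attendees are placeholders; the workflow itself uses n8n's Calendar Tool rather than raw HTTP):

```typescript
// Hedged sketch of creating the follow-on event via the Google Calendar
// REST API; the summary, times, and attendees are placeholders.
async function createFollowUp(accessToken: string) {
  const res = await fetch(
    "https://www.googleapis.com/calendar/v3/calendars/primary/events",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        summary: "Follow-up: project kickoff",
        start: { dateTime: "2024-02-01T10:00:00Z" },
        end: { dateTime: "2024-02-01T10:30:00Z" },
        attendees: [{ email: "client@example.com" }],
      }),
    },
  );
  return res.json();
}
```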
jimleuk
Jimleuk
Notion node
OpenAI Chat Model node
+3

Notion knowledge base AI assistant

Who is this for: This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It's ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup. Check out the Notion template this Assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n How it works: The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this Knowledge Base style Notion database. The assistant can search across multiple properties, like tags or questions, and retrieves content from inside individual Notion pages for additional context. Key features include: querying the database with flexible filters; searching within individual Notion pages and extracting relevant blocks; providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification. This assistant uses two HTTP request tools, one for querying the Notion database and another for pulling data from within specific pages. It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets. Set up: Find basic set-up instructions inside the workflow itself or watch a quickstart video 👇
max-n8n
Max Tkacz
HTTP Request node
Google Drive node
+4

CV Resume PDF Parsing with Multimodal Vision AI

This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? No fret, using AI vision is one approach to solve this problem... read on! How it works: Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration. The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image. The image is then forwarded to a Basic LLM node to process using our multimodal model; in this example, we'll use Google's Gemini 1.5 Pro. In the Basic LLM node, we'll need to set a User Message with the type of Binary, which allows us to directly send the image file in our request. The LLM is now immune to the hidden prompt, and its response is as expected (see the sketch below). The example CV/resume with hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing Requirements: A Google Gemini API key; alternatively, GPT-4 will also work for this use case. Stirling PDF or another service which can convert PDFs into images. Note: for data privacy, this example uses a public API, and it is recommended that you self-host and use a private instance of Stirling PDF instead. Customising the workflow: Swap out the manual trigger for another trigger, such as a webhook, to integrate into your existing services. This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You could additionally extract data points instead, such as years of experience, previous companies, etc.
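For the GPT-4 alternative the author mentions, forwarding the converted PNG looks roughly like this with OpenAI's chat API (a sketch only; the template itself sends the binary through n8n's Basic LLM node to Gemini 1.5 Pro):

```typescript
import { readFileSync } from "node:fs";

// Hedged sketch: pass the converted PNG as a base64 data URL alongside
// the screening prompt. The file path is a placeholder.
async function screenResume(apiKey: string, pngPath: string) {
  const image = readFileSync(pngPath).toString("base64");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Does this candidate look qualified? Answer yes/no with reasons." },
          { type: "image_url", image_url: { url: `data:image/png;base64,${image}` } },
        ],
      }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```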
jimleuk
Jimleuk
HTTP Request node
Extract from File node

CV Screening with OpenAI

Video Guide: I prepared a detailed guide showing the whole process of building a resume analyzer. Who is this for? This workflow is ideal for recruitment agencies, HR professionals, and hiring managers looking to automate the initial screening of CVs. It is especially useful for organizations handling large volumes of applications and seeking to streamline their recruitment process. What problem does this workflow solve? Manually screening resumes is time-consuming and prone to human error. This workflow automates the process, providing consistent and objective analysis of CVs against job descriptions. It helps filter out unsuitable candidates early, reducing workload and improving the overall efficiency of the recruitment process. What this workflow does: It automates resume screening using OpenAI for analysis, providing a matching score, a summary of candidate suitability, and key insights into why the candidate fits (or doesn't fit) the job. Retrieve resume: the workflow downloads CVs from a direct link (e.g., Supabase storage or Dropbox). Extract data: it extracts text from PDF or DOC files for analysis. Analyze with OpenAI: it sends the extracted data and job description to OpenAI to generate a matching score, summarize candidate strengths and weaknesses, and provide actionable insights into their suitability for the job (see the sketch below). Setup: Create accounts for n8n (workflow automation) and OpenAI (AI-powered CV analysis). Upload CV files to Supabase storage or Dropbox to generate a direct link for processing. Prepare artifacts for OpenAI: define the metrics you want from the analysis (e.g., matching percentage, strengths, weaknesses), generate a JSON Schema to structure responses and ensure compatibility with your database, and write a clear, detailed prompt to ensure accurate analysis. In n8n: download the file from its direct URL, extract the text with n8n's PDF or text extraction nodes, and POST the extracted CV data and job description to OpenAI's API, using the JSON Schema to structure the response. Summary: This workflow provides a seamless, automated solution for CV screening, helping recruitment agencies and HR teams save time while maintaining consistency in candidate evaluation. It enables organizations to focus on the most suitable candidates, improving the overall hiring process.
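The JSON Schema step corresponds to OpenAI's structured-output support. A hedged sketch with illustrative metric names (the real schema is whatever you define in the preparation step):

```typescript
// Hedged sketch of the scoring call: a JSON Schema keeps the model's
// answer machine-readable. The metric names are illustrative.
async function scoreCandidate(apiKey: string, cvText: string, jobDescription: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "Score this CV against the job description." },
        { role: "user", content: `Job:\n${jobDescription}\n\nCV:\n${cvText}` },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "cv_screening",
          strict: true,
          schema: {
            type: "object",
            properties: {
              matching_score: { type: "number" },
              strengths: { type: "array", items: { type: "string" } },
              weaknesses: { type: "array", items: { type: "string" } },
            },
            required: ["matching_score", "strengths", "weaknesses"],
            additionalProperties: false,
          },
        },
      },
    }),
  });
  return JSON.parse((await res.json()).choices[0].message.content);
}
```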
lowcodingdev
Mark Shcherbakov
HTTP Request node
Telegram node
Telegram Trigger node
+12

HR & IT Helpdesk Chatbot with Audio Transcription

An intelligent chatbot that assists employees by answering common HR or IT questions, supporting both text and audio messages. This unique feature ensures employees can conveniently ask questions via voice messages, which are transcribed and processed just like text queries. How it works: Message capture: when an employee sends a message to the chatbot in WhatsApp or Telegram (text or audio), the chatbot captures the input. Audio transcription: for audio messages, the chatbot transcribes the content into text using an AI-powered transcription service (e.g., Whisper, Google Cloud Speech-to-Text); see the sketch below. Query processing: the transcribed text (or directly entered text) is sent to an AI service (e.g., OpenAI) to generate embeddings, which are used to search a vector database (e.g., Supabase or Qdrant) containing the company's internal HR and IT documentation. The most relevant data is retrieved and sent back to the AI service to compose a concise and helpful response. Response delivery: the chatbot sends the final response back to the employee, whether the input was text or audio. Set up steps: Estimated time: 20-25 minutes. Prerequisites: create an account with an AI provider (e.g., OpenAI); connect WhatsApp or Telegram credentials in n8n; set up a transcription service (e.g., Whisper or Google Cloud Speech-to-Text); configure a vector database (e.g., Supabase or Qdrant) and add your internal HR and IT documentation; import the workflow template into n8n and update environment variables for your credentials.
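The transcription step, using OpenAI's Whisper endpoint as the example service, looks roughly like this (a sketch; the audio file name is a placeholder for the downloaded voice message):

```typescript
import { readFileSync } from "node:fs";

// Hedged sketch of the audio-transcription step with OpenAI's Whisper
// endpoint; "voice-note.ogg" is a placeholder for the downloaded message.
async function transcribe(apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("model", "whisper-1");
  form.append("file", new Blob([readFileSync("voice-note.ogg")]), "voice-note.ogg");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` }, // fetch sets the multipart boundary
    body: form,
  });
  return (await res.json()).text;
}
```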
occult
Felipe Braga

More AI workflow templates

OpenAI Chat Model node
SerpApi (Google Search) node

AI agent chat

This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions. To use this template, you need to be on n8n version 1.50.0 or later.
n8n-team
n8n Team
HTTP Request node
Merge node
+7

Scrape and summarize webpages with AI

This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is both efficient and genuinely useful, distilling long content into concise summaries. Note that to use this template, you need to be on n8n version 1.50.0 or later.
n8n-team
n8n Team
HTTP Request node
Markdown node
+5

AI agent that can scrape webpages

⚙️🛠️🚀🤖🦾 This template is a PoC of a ReAct AI Agent capable of fetching arbitrary pages (not only Wikipedia or Google search results). On the top part there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content. The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters: url (the address of the page to fetch), method (full / simplified), and maxlimit (maximum length for the final page; for longer pages an error message is returned to the agent). Page content fetching is a multistep process: an HTTP Request node tries to get the page content. If the page content was successfully retrieved, a series of post-processing steps begin: extract the HTML BODY content, remove all unnecessary tags to reduce the page size, further eliminate external URLs and IMG src values (based on the method query parameter), and convert the remaining HTML to Markdown, reducing the page length even more while preserving the basic page structure (see the sketch below). The remaining content is sent back to the agent if it's not too long (maxlimit = 70000 by default; see the CONFIG node). NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object. Please reach out to Eduard if you need further assistance with your n8n workflows and automations! Note that to use this template, you need to be on n8n version 1.19.4 or later.
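The post-processing chain (BODY extraction, tag stripping, Markdown conversion, length check) can be approximated as below; turndown is one common HTML-to-Markdown library and stands in for whatever the template actually uses:

```typescript
import TurndownService from "turndown"; // one common HTML->Markdown library;
                                        // not necessarily what the template uses

// Approximation of the post-processing chain: keep the BODY, drop noisy
// tags, convert to Markdown, then enforce the agent's maxlimit.
function simplifyPage(html: string, maxlimit = 70000): string {
  const body = html.match(/<body[^>]*>([\s\S]*)<\/body>/i)?.[1] ?? html;
  const cleaned = body
    .replace(/<(script|style|noscript)[\s\S]*?<\/\1>/gi, "") // remove code/style blocks
    .replace(/\ssrc="[^"]*"/gi, "");                         // drop IMG src values
  const markdown = new TurndownService().turndown(cleaned);
  if (markdown.length > maxlimit) {
    throw new Error(`Page too long (${markdown.length} > ${maxlimit})`);
  }
  return markdown;
}
```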
eduard
Eduard
HTTP Request node
WhatsApp Business Cloud node
+10

Building Your First WhatsApp Chatbot

This n8n template builds a simple WhatsApp chatbot acting as a Sales Agent. The Agent is backed by a product catalog vector store to better answer users' questions. This template is intended to help introduce n8n users interested in building with WhatsApp. How it works: This template is in 2 parts: creating the product catalog vector store and building the WhatsApp AI chatbot. A product brochure is imported via the HTTP Request node and its text contents extracted. The text contents are then uploaded to the in-memory vector store to build a knowledge base for the chatbot. A WhatsApp trigger is used to capture messages from customers, with non-text messages filtered out. The customer's message is sent to the AI Agent, which queries the product catalogue using the vector store tool. The Agent's response is sent back to the user via the WhatsApp node. How to use: Once you've set up and configured your WhatsApp account and credentials, first populate the vector store by clicking the "Test Workflow" button. Next, activate the workflow to enable the WhatsApp chatbot. Message your designated WhatsApp number and you should receive a message from the AI sales agent. Tweak the datasource and behaviour as required. Requirements: a WhatsApp Business Account and OpenAI for the LLM. Customising this workflow: Upgrade the vector store to Qdrant for persistence and production use cases. Handle different WhatsApp message types for a richer and more engaging experience for customers.
jimleuk
Jimleuk
Merge node
Telegram node
Telegram Trigger node
+2

Telegram AI Chatbot

The workflow starts by listening for messages from Telegram users. The message is then processed, and based on its content, different actions are taken. If it's a regular chat message, the workflow generates a response using the OpenAI API and sends it back to the user. If it's a command to create an image, the workflow generates an image using the OpenAI API and sends the image to the user. If the command is unsupported, an error message is sent. Throughout the workflow, there are additional nodes for displaying notes and simulating typing actions.
eduard
Eduard
Google Drive node
Binary Input Loader node
Embeddings OpenAI node
OpenAI Chat Model node
+5

Ask questions about a PDF using AI

The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response. Note that to use this template, you need to be on n8n version 1.19.4 or later.
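"Most similar" here means highest cosine similarity between the question's embedding and the stored vectors. Pinecone computes this server-side; a small sketch just to make the retrieval step concrete:

```typescript
// Cosine similarity, the metric behind "most similar vectors". Shown
// only to illustrate the retrieval step; Pinecone does this for you.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```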
davidn8n
David Roberts
