Split Out node

Integrate Split Out with 500+ apps and services

Unlock Split Out's full potential with n8n: connect it to similar Core Nodes integrations and over 1000 other services. Create adaptable and scalable workflows between Split Out and your stack, all within a building experience you will love.

Popular ways to use Split Out integration

HTTP Request node
+9

Save Qualys Reports to TheHive

Automate report generation with n8n & Qualys. The Save Qualys Reports to TheHive workflow is a robust solution that automates the retrieval and storage of Qualys reports in TheHive. It fetches reports from Qualys, filters out already processed reports, and creates cases in TheHive for the new ones. It runs every hour to ensure continuous monitoring and up-to-date vulnerability management, making it ideal for Security Operations Centers (SOCs).

How it works:
- Set Global Variables: initializes the necessary global variables, such as base_url and newtimestamp, so the workflow runs with the correct configuration and up-to-date timestamps. Be sure to change these to match your environment.
- Fetch Reports from Qualys: sends a GET request to the Qualys API to retrieve finished reports, ensuring timely updates and consistent data retrieval.
- Convert XML to JSON: converts the XML response to JSON for easier data manipulation and integration into TheHive.
- Filter Reports: uses creation timestamps to check whether reports have already been processed, so only new reports are handled and duplicates are avoided.
- Process Each Report: loops through the list of new reports so each is processed individually, which prevents bulk-processing issues and improves reliability.
- Create Case in TheHive: generates a new case in TheHive for each report, serving as a container for the report data.
- Download and Attach Report: downloads the report from Qualys and attaches it to the respective case in TheHive, so all data is properly archived and easily accessible for review.

Get started: ensure your Qualys and TheHive integrations are properly set up, then customize the workflow to fit your specific vulnerability management needs.

Need help? Join the discussion on our Forum or check out resources on Discord!

Deploy this workflow to streamline your vulnerability management process, improve response times, and enhance the efficiency of your security operations.
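For orientation, here is a minimal Python sketch of the fetch, convert, and filter steps, assuming Qualys's report list endpoint with basic auth; the saved timestamp, credentials, and field paths are illustrative, and the template itself implements this with n8n nodes rather than a script:

```python
import requests
import xmltodict  # pip install xmltodict

BASE_URL = "https://qualysapi.qualys.com"   # your Qualys platform URL (base_url)
LAST_SEEN = "2024-01-01T00:00:00Z"          # last processed timestamp (newtimestamp)

# Fetch the list of finished reports; Qualys answers in XML.
resp = requests.get(
    f"{BASE_URL}/api/2.0/fo/report/",
    params={"action": "list", "state": "Finished"},
    headers={"X-Requested-With": "n8n"},    # header required by the Qualys API
    auth=("qualys_user", "qualys_password"),
)
resp.raise_for_status()

# Convert the XML response to JSON-like dicts for easier manipulation.
data = xmltodict.parse(resp.text)
reports = data["REPORT_LIST_OUTPUT"]["RESPONSE"]["REPORT_LIST"]["REPORT"]
if isinstance(reports, dict):  # xmltodict returns a single dict for one report
    reports = [reports]

# Keep only reports created after the last processed timestamp.
new_reports = [r for r in reports if r["LAUNCH_DATETIME"] > LAST_SEEN]
print(f"{len(new_reports)} new report(s) to file as TheHive cases")
```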
djangelic
Angel Menendez
Google Sheets node
HTTP Request node
+12

Survey Insights with Qdrant, Python and Information Extractor

This n8n template is one of a 3-part series exploring use cases for clustering vector embeddings: Survey Insights, Customer Insights, and Community Insights.

This template demonstrates the Survey Insights scenario, where survey participant responses can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, researchers can save days or even weeks of work breaking down cohorts of participants and identifying frequently mentioned positives and negatives.

Sample output: https://docs.google.com/spreadsheets/d/e/2PACX-1vT6m8XH8JWJTUAfwojc68NAUGC7q0lO7iV738J7aO5fuVjiVzdTRRPkMmT1C4N8TwejaiT0XrmF1Q48/pubhtml#

How it works:
- All survey questions and responses are imported from a Google Sheet.
- Responses are inserted into a Qdrant collection, carefully tagged with the question and survey metadata.
- For each question, all relevant responses are put through a clustering algorithm using the Python Code node, and the Qdrant points are returned in clustered groups.
- Each group is looped over to fetch the payloads of its points and feed them to the AI agent, which summarises them and generates insights.
- The resulting insights and raw responses are saved to the Google Spreadsheet for further analysis by the researcher.

Requirements:
- Survey data and format as shown in the attached Google Sheet.
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

Customising the template: adjust the clustering parameters to suit your data. Use more clusters for open-ended questions and fewer when responses are multiple choice.
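As a rough illustration of the clustering step, here is a minimal Python sketch that pulls one question's response vectors from Qdrant and groups them with KMeans; the collection name, payload keys, and cluster count are assumptions, not the template's exact settings:

```python
import numpy as np
from qdrant_client import QdrantClient, models
from sklearn.cluster import KMeans

client = QdrantClient(url="http://localhost:6333")  # your Qdrant instance

# Pull all responses tagged with one survey question (payload keys illustrative).
points, _ = client.scroll(
    collection_name="survey_responses",
    scroll_filter=models.Filter(must=[
        models.FieldCondition(key="question_id", match=models.MatchValue(value="q1")),
    ]),
    with_vectors=True,
    with_payload=True,
    limit=1000,
)

vectors = np.array([p.vector for p in points])

# Group similar responses; use more clusters for open-ended questions,
# fewer when responses are multiple choice.
labels = KMeans(n_clusters=5, random_state=0).fit_predict(vectors)

for cluster_id in sorted(set(labels)):
    group = [points[i].payload["response"] for i in np.where(labels == cluster_id)[0]]
    print(cluster_id, len(group), group[:2])  # each group goes to the AI agent
```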
jimleuk
Jimleuk
HTTP Request node
Merge node
Ghost node
+9

Research AI Agent Team with auto citations using OpenRouter and Perplexity

Purpose of workflow: This AI-powered workflow automatically generates comprehensive, well-researched articles on any given topic. It uses a team of AI agents to streamline the research and writing process, producing high-quality content with proper citations and credible sources.

How it works (multi-agent team):
- Research Leader: plans and conducts initial research, creating a table of contents.
- Project Planner: breaks the table of contents down into manageable sections.
- Research Assistants: multiple agents that conduct in-depth research on their assigned sections.
- Editor: compiles and refines the final article, ensuring coherence and proper citations.

Key features:
- Utilizes Perplexity AI for internet search and citation capabilities.
- Produces well-structured articles with proper citations.
- Customizable parameters (topic, tone, word count, number of sections).

Step-by-step setup:
1. Get an account from OpenRouter.ai to access the Perplexity API.
2. Set the API key in the Perplexity API node: the credential key name is Authorization, and the key value is Bearer <api-key value>.
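For reference, here is a minimal Python sketch of what that credential encodes, calling a Perplexity model through OpenRouter's OpenAI-compatible chat endpoint; the model ID is illustrative, so check OpenRouter's catalog for current names:

```python
import requests

OPENROUTER_API_KEY = "<api-key value>"  # from OpenRouter.ai

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    # The same header the n8n credential sets: "Authorization: Bearer <key>".
    headers={"Authorization": f"Bearer {OPENROUTER_API_KEY}"},
    json={
        "model": "perplexity/sonar",  # illustrative Perplexity model ID
        "messages": [
            {"role": "user",
             "content": "List three recent findings on topic X, with citations."},
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```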
derekcheungsa
Derek Cheung
Google Sheets node
Postgres node
Compare Datasets node
Split Out node

Synchronize your Google Sheets with Postgres

Sync your Google Sheets data with your Postgres database table, requiring minimal adjustments. Follow these steps:
1. Retrieve data: pull data from Google Sheets and PostgreSQL.
2. Compare datasets: identify differences, focusing on new or updated entries.
3. Update PostgreSQL: apply the changes so both platforms mirror each other.

Automate this process to synchronize the data regularly. Before starting, grant the necessary access to both Google Sheets and PostgreSQL, and specify which data to synchronize. This streamlined workflow enhances data consistency across platforms. This example is a one-way synchronization from Google Sheets into Postgres; with small adjustments, you can make it run the other way around, or two-way.
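To make the compare step concrete, here is a minimal Python sketch of diffing the two datasets on a shared unique id column; the sample rows and column names are illustrative, and the template itself does this with the Compare Datasets node:

```python
# Rows fetched upstream from each source (sample data for illustration).
sheet_rows = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": "bob@new-domain.com"},
]
db_rows = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": "bob@old-domain.com"},
]

sheet_by_id = {r["id"]: r for r in sheet_rows}
db_by_id = {r["id"]: r for r in db_rows}

# New in Sheets and missing from Postgres -> INSERT.
to_insert = [r for i, r in sheet_by_id.items() if i not in db_by_id]
# Present in both but changed -> UPDATE.
to_update = [r for i, r in sheet_by_id.items() if i in db_by_id and r != db_by_id[i]]

print("insert:", to_insert)
print("update:", to_update)  # Bob's email changed
```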
bwiertz
Bela
Merge node
Monday.com node
+4

Retrieve a Monday.com row and all data in a single node

This workflow is a building block designed to be called from other workflows via an Execute Workflow node. When called from another workflow with a JSON input containing a "pulse" field holding the ID of the monday.com item to pull, this workflow returns:
- The item's name and ID
- All column data, indexable by column name
- All column data, indexable by the column's ID string
- All board relation columns, with their data and column values
- All subitems, with their data and column values

++Prerequisites++
- A monday.com account and credential
- A workflow that needs to get detailed data from a monday.com row
- The pulse ID of the monday.com row to retrieve data from

++Setup++
1. Import the workflow.
2. Configure all monday nodes with your credentials and save the workflow.
3. Copy the workflow ID from its URL.
4. In a different workflow, add an Edit Fields node that outputs the field "pulse" containing the monday item you want to retrieve.
5. Feed the Edit Fields node into an Execute Workflow node, and paste the workflow ID from above into it.

The "pulse" field tells the workflow which pulse to retrieve, and it can be populated by an expression in your workflow. There is an example of the Edit Fields and Execute Workflow nodes in the template; a sketch of the input and output shapes follows below.
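A rough sketch of the contract between the two workflows, using assumed key names rather than the template's exact output fields:

```python
# What the calling workflow sends into the Execute Workflow node:
payload = {"pulse": 1234567890}  # example monday.com item ID

# Illustrative shape of what the sub-workflow returns (key names assumed):
result = {
    "name": "My item",
    "id": 1234567890,
    "columns_by_name": {"Status": "Done", "Owner": "Joey"},
    "columns_by_id": {"status": "Done", "person": "Joey"},
    "board_relations": [{"name": "Linked item", "columns": {"Status": "Stuck"}}],
    "subitems": [{"name": "Subitem 1", "columns": {"Status": "Working on it"}}],
}
```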
jdana
Joey D'Anna
HTTP Request node
Postgres node
Slack node
+5

Enrich up to 1500 emails per hour with Dropcontact batch requests

This template makes Dropcontact batch requests of up to 250 contacts every 10 minutes (1,500/hour), which is valuable when high-volume email enrichment is expected. Dropcontact will look for the email and basic email qualification if first_name, last_name, and company_name are provided.

Step 1: Node "Profiles Query"
Connect your own source (Airtable, Google Sheets, Supabase, ...); the template uses Postgres by default.
Note I: Make sure your source returns a maximum of 250 items.
Note II: The next node uses these variables, so make sure you can map them from your source: first_name, last_name, website (company_name would work too), and full_name (see note).
Note III: This template uses the Dropcontact Batch API, which works in a POST & GET setup, not a GET request only: Dropcontact needs time to process the batch data load before results can be retrieved.

Step 2: Node "Data Transformation"
Transforms the input variables into the JSON format the Dropcontact API expects for a batch request. "full_name" is used as a custom identifier to match the returned email back to the proper contact in your source database. To make things easy, use a unique identifier in the full_name variable.

Step 3: Node "Bulk Dropcontact Requests"
Enter your Dropcontact credentials in this node.

Step 4: Connect your output source by mapping the data you would like to use.

Step 5: Node "Slack" (optional)
Connect your Slack account to be notified if an error occurs.

TIP: Try running the workflow with a batch of 10 (not 250) first, as it might need an initial run before you can map the data to your final destination. Once the data fields are properly mapped, adjust back to 250.
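For context, here is a minimal Python sketch of the POST & GET pattern against Dropcontact's batch endpoint, as I understand its public API; verify the endpoint, header, and field names against the current documentation before relying on them:

```python
import time
import requests

HEADERS = {
    "X-Access-Token": "<your-dropcontact-api-key>",
    "Content-Type": "application/json",
}

# POST the batch (up to 250 contacts); full_name doubles as a custom identifier.
contacts = [
    {"first_name": "Jane", "last_name": "Doe",
     "website": "example.com", "full_name": "jane-doe-001"},
]
post = requests.post("https://api.dropcontact.io/batch",
                     json={"data": contacts}, headers=HEADERS)
post.raise_for_status()
request_id = post.json()["request_id"]

# GET the results; Dropcontact needs time to process the batch, so poll.
while True:
    result = requests.get(f"https://api.dropcontact.io/batch/{request_id}",
                          headers=HEADERS).json()
    if result.get("success"):
        break
    time.sleep(60)  # batches can take several minutes

for contact in result["data"]:
    print(contact.get("email"), contact.get("full_name"))
```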
vliegendepater
victor de coster

Over 3000 companies switch to n8n every single week

Connect Split Out with your companyโ€™s tech stack and create automation workflows

in other news I installed @n8n_io tonight and holy moly it's good

it's compatible with EVERYTHING

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

Implement complex processes faster with n8n
