
Integrate the LangChain Call n8n Workflow Tool into your LLM apps and connect it with 422+ apps and services

Use the Call n8n Workflow Tool to easily build AI-powered applications with LangChain and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.

Popular ways to use the Call n8n Workflow Tool integration

Google Sheets node
HTTP Request node
Webhook node

Enrich Company Data from Google Sheet with OpenAI Agent and ScrapingBee

This workflow demonstrates how to enrich data from a list of companies in a spreadsheet. While this workflow is production-ready if all steps are followed, adding error handling would enhance its robustness.

**Important notes**

- **Check legal regulations**: This workflow involves scraping, so make sure to check the legal regulations around scraping in your country before getting started. Better safe than sorry!
- **Mind those tokens**: OpenAI tokens can add up fast, so keep an eye on usage unless you want a surprising bill that could knock your socks off! 💸

**Main Workflow**

**Node 1 - Webhook**: This node triggers the workflow via a webhook call. You can replace it with any other trigger of your choice, such as a form submission, a new row added in Google Sheets, or a manual trigger.

**Node 2 - Get Rows from Google Sheet**: This node retrieves the list of companies from your spreadsheet. Here is the Google Sheet template you can use. The columns in this Google Sheet are:

- **Company**: The name of the company
- **Website**: The website URL of the company (these two fields are required at this step)
- **Business Area**: The business area deduced by OpenAI from the scraped data
- **Offer**: The offer deduced by OpenAI from the scraped data
- **Value Proposition**: The value proposition deduced by OpenAI from the scraped data
- **Business Model**: The business model deduced by OpenAI from the scraped data
- **ICP**: The Ideal Customer Profile deduced by OpenAI from the scraped data
- **Additional Information**: Information related to the scraped data, including:
  - **Information Sufficiency**: Indicates whether the information was sufficient to provide a full analysis ("Sufficient" or "Insufficient")
  - **Insufficient Details**: If labeled "Insufficient", specifies what information was missing or needed to complete the analysis
  - **Mismatched Content**: Indicates whether the page content aligns with that of a typical company page
  - **Suggested Actions**: Provides recommendations if the page content is insufficient or mismatched, such as verifying the URL or searching for alternative sources

**Node 3 - Loop Over Items**: This node ensures that, in subsequent steps, the website in "extra workflow input" corresponds to the row being processed. You can delete this node, but you'll then need to ensure that the "query" sent to the scraping workflow corresponds to the website of the specific company being scraped (rather than just the first row).

**Node 4 - AI Agent**: This AI agent is configured with a prompt to extract data from the content it receives. The node has three sub-nodes:

- **OpenAI Chat Model**: The model currently used is gpt-4o-mini.
- **Call n8n Workflow**: This sub-node calls the scraper workflow, which uses ScrapingBee, and retrieves the scraped data.
- **Structured Output Parser**: This parser structures the output for clarity and ease of use before the rows are added to the Google Sheet (a sketch of the expected shape follows this description).

**Node 5 - Update Company Row in Google Sheet**: This node updates the specific company's row in Google Sheets with the enriched data.

**Scraper Agent Workflow**

**Node 1 - Tool Called from Agent**: This is the trigger for when the AI Agent calls the scraper. A query is sent with the company name and the website URL.

**Node 2 - Set Company URL**: This node renames a field, which may seem trivial but is useful for performing transformations on the data received from the AI Agent.

**Node 3 - ScrapingBee: Scrape Company's Website**: This node scrapes the provided URL using ScrapingBee. You can use any scraper of your choice, but ScrapingBee is recommended, as it allows you to configure scraper behavior directly. Once configured, copy the provided "curl" command and import it into n8n.

**Node 4 - HTML to Markdown**: This node converts the scraped HTML to Markdown, which is then sent to OpenAI. The Markdown format generally uses fewer tokens than HTML.

**Improving the Workflow**

It's always a pleasure to share workflows, but creators sometimes want to keep some magic to themselves ✨. Here are some ways you can enhance this workflow:

- Handle potential errors.
- Configure the scraper tool to scrape other pages on the website. Although this will cost more tokens, it can be useful (e.g., scraping "Pricing" or "About Us" pages in addition to the homepage).
- Instead of Google Sheets, connect directly to your CRM to enrich company data.
- Trigger the workflow from form submissions on your website and send the scraped data about the lead to a Slack or Teams channel.
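For reference, the Structured Output Parser's expected output can be sketched as a JSON Schema assembled from the Google Sheet columns listed above. The property names here are illustrative assumptions; adapt them to your sheet and to the JSON example format your parser node expects:

```typescript
// A hypothetical schema for the Structured Output Parser sub-node, derived
// from the Google Sheet columns described above. Field names are assumptions.
const companyEnrichmentSchema = {
  type: "object",
  properties: {
    businessArea: { type: "string" },
    offer: { type: "string" },
    valueProposition: { type: "string" },
    businessModel: { type: "string" },
    icp: { type: "string" },
    additionalInformation: {
      type: "object",
      properties: {
        // "Sufficient" or "Insufficient", as described in the column list
        informationSufficiency: { type: "string", enum: ["Sufficient", "Insufficient"] },
        insufficientDetails: { type: "string" },
        mismatchedContent: { type: "string" },
        suggestedActions: { type: "string" },
      },
    },
  },
  required: ["businessArea", "offer", "valueProposition", "businessModel", "icp"],
} as const;
```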
Dataki
Google Calendar node
Gmail node
Item Lists node

Suggest meeting slots using AI

The purpose of this n8n workflow is to automate the process of identifying incoming Gmail messages that request an appointment, evaluating their content, checking calendar availability, and then composing and sending a response email. Note that to use this template, you need to be on n8n version 1.19.4 or later.
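As a minimal sketch of the availability check at the core of this workflow (not the template's actual nodes), the busy intervals can be read from Google Calendar's freeBusy endpoint; the access token and calendar id below are assumptions:

```typescript
// Fetch busy intervals from Google Calendar's freeBusy endpoint; free slots
// can then be suggested around them. accessToken must carry a calendar scope.
async function getBusyIntervals(accessToken: string, timeMin: string, timeMax: string) {
  const response = await fetch("https://www.googleapis.com/calendar/v3/freeBusy", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      timeMin,                    // e.g. "2024-01-15T09:00:00Z"
      timeMax,                    // e.g. "2024-01-15T18:00:00Z"
      items: [{ id: "primary" }], // the calendar to check (assumption: primary)
    }),
  });
  if (!response.ok) throw new Error(`freeBusy request failed: ${response.status}`);
  const data = await response.json();
  return data.calendars.primary.busy as Array<{ start: string; end: string }>;
}
```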
n8n Team
Slack node
Code node

Ask a human for help when the AI doesn't know the answer

This workflow tries to answer user queries using the standard GPT-4 model. If it can't answer, it sends a message to Slack to ask for human help and prompts the user to supply an email address. This workflow is used in Advanced AI examples | Ask a human in the documentation.

To use this workflow:

1. Load it into your n8n instance.
2. Add your credentials as prompted by the notes.
3. Configure the Slack node to use your Slack details, or swap out Slack for a different service.
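If you swap the Slack node for a plain Slack incoming webhook, the "ask a human" step might look like the following sketch; the webhook URL is hypothetical, and the payload format is Slack's standard incoming-webhook body:

```typescript
// A minimal sketch of escalating an unanswered query to Slack via an
// incoming webhook. SLACK_WEBHOOK_URL is an assumption (set your own).
async function askHumanForHelp(question: string, userEmail: string) {
  const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";
  const response = await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Include the user's email so the human can follow up directly
      text: `The AI couldn't answer: "${question}"\nPlease follow up with ${userEmail}.`,
    }),
  });
  if (!response.ok) throw new Error(`Slack webhook failed: ${response.status}`);
}
```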
Deborah
OpenAI Chat Model node

MongoDB AI Agent - Intelligent Movie Recommendations

**Who is this for?**

This workflow is designed for:

- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

**What problem does this workflow solve?**

Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:

- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

**What this workflow does**

This workflow creates an intelligent agent that:

- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines
- Queries a movie database containing detailed information, including plot summaries, genre classifications, cast and director information, runtime and release dates, and ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

**Setup**

Required credentials:

- OpenAI API credentials
- MongoDB connection details

Node configuration:

- Configure the MongoDB connection in the MongoDBAggregate node
- Set up the OpenAI Chat Model with your API key
- Ensure the webhook trigger is properly configured for receiving chat messages

Database requirements:

- A MongoDB collection named "movies" with the specified document structure
- Proper indexes for efficient querying
- Appropriate user permissions for read/write operations

**How to customize this workflow**

Modify the document structure:

- Update the tool description in the MongoDBAggregate node to match your collection schema
- Adjust the aggregation pipeline templates for your specific use case

Enhance the AI agent:

- Customize the prompt in the "AI Agent - Movie Recommendation" node
- Modify the window buffer memory size based on your context needs
- Add additional tools for more functionality

Extend functionality:

- Add more MongoDB operations beyond aggregation
- Implement additional workflows for different types of queries
- Create custom error handling and validation
- Add user authentication and rate limiting

Integration options:

- Connect to external APIs for additional movie data
- Add webhook endpoints for different platforms
- Implement caching mechanisms for frequent queries
- Add data transformation nodes for specific output formats

This workflow serves as a foundation that can be adapted to various use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
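To make the "translates user requests into aggregation pipelines" step concrete, here is a hypothetical pipeline of the kind the agent might generate for a request like "highly rated sci-fi movies released after 2010". The field names (genres, year, imdb.rating) follow MongoDB's public sample_mflix movies dataset and are assumptions about your collection:

```typescript
import { MongoClient } from "mongodb";

// Run an aggregation of the kind the agent emits against the movies collection.
async function findTopRatedSciFi(uri: string) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const movies = client.db("sample_mflix").collection("movies");
    const pipeline = [
      // Filter by genre and release year
      { $match: { genres: "Sci-Fi", year: { $gt: 2010 } } },
      // Best-rated first
      { $sort: { "imdb.rating": -1 } },
      // Keep the result small enough for the LLM context window
      { $limit: 5 },
      // Return only the fields the agent needs to answer
      { $project: { _id: 0, title: 1, year: 1, "imdb.rating": 1, plot: 1 } },
    ];
    return await movies.aggregate(pipeline).toArray();
  } finally {
    await client.close();
  }
}
```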
Pavel Duchovny
HTTP Request node
Merge node
Ghost node

Research AI Agent Team with auto citations using OpenRouter and Perplexity

**Purpose of workflow**

This AI-powered workflow is designed to automatically generate comprehensive, well-researched articles on any given topic. It utilizes a team of AI agents to streamline the research and writing process, producing high-quality content with proper citations and credible sources.

**How it works**

Multi-agent team:

- **Research Leader**: Plans and conducts initial research, creating a table of contents.
- **Project Planner**: Breaks down the table of contents into manageable sections.
- **Research Assistants**: Multiple agents that conduct in-depth research on assigned sections.
- **Editor**: Compiles and refines the final article, ensuring coherence and proper citations.

Key features:

- Utilizes Perplexity AI for internet search and citation capabilities
- Produces well-structured articles with proper citations
- Customizable parameters (topic, tone, word count, number of sections)

**Step-by-step setup**

1. Get an account from OpenRouter.ai to access the Perplexity API.
2. Set the API key in the Perplexity API node: credential key name `Authorization`, key value `Bearer <api-key value>` (a sketch of this credential in use follows below).
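The setup step above boils down to sending an `Authorization: Bearer <api-key value>` header to OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch follows; the model id is an assumption, so check OpenRouter for the current Perplexity model names:

```typescript
// Query a Perplexity model through OpenRouter using the header credential
// described in the setup. OPENROUTER_API_KEY is an assumption (set your own).
const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY ?? "";

async function searchWithPerplexity(question: string) {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENROUTER_API_KEY}`, // the credential from step 2
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "perplexity/sonar", // hypothetical id: pick a Perplexity model on OpenRouter
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!response.ok) throw new Error(`OpenRouter request failed: ${response.status}`);
  return response.json();
}
```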
Derek Cheung
HTTP Request node
Webhook node
Respond to Webhook node

AI Agent to chat with your Search Console data, using OpenAI and Postgres

Edit 19/11/2024: As explained in the workflow, the AI Agent with the original system prompt was not effective when using gpt-4o-mini. To address this, I optimized the prompt to work better with this model. You can find the prompts I've tested on this Notion page. And yes, there is one that works well with gpt-4o-mini.

This AI Agent enables you to interact with your Search Console data through a chat interface. Each node is documented within the template, providing sufficient information for setup and usage. You will also need to configure Search Console OAuth credentials; follow the n8n documentation to set them up.

**Important notes**

**Correctly configure scopes for Search Console API calls**: It's essential to configure the scopes correctly in your Google Search Console API OAuth2 credentials. Incorrect configuration can cause issues with the refresh token, requiring frequent reconnections. The configuration I use to avoid constant re-authentication is sketched below. Of course, you'll need to add your client_id and client_secret from the Google Cloud Platform app you created to access your Search Console data.

**Configure authentication for the webhook**: Since the webhook will be publicly accessible, don't forget to set up authentication. I've used Basic Auth, but feel free to choose the method that best meets your security requirements.

🤩💖 Example of awesome things you can do with this AI Agent
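As referenced in the scopes note above, here is a minimal sketch of the OAuth2 credential settings, assuming generic Google OAuth2 fields. The `webmasters.readonly` scope is Google's documented read-only Search Console scope, and the offline-access parameters are what make Google return a long-lived refresh token:

```typescript
// A sketch of the Search Console OAuth2 configuration; placeholder values
// must be replaced with your own Google Cloud Platform app credentials.
const searchConsoleOAuthConfig = {
  authUrl: "https://accounts.google.com/o/oauth2/v2/auth",
  tokenUrl: "https://oauth2.googleapis.com/token",
  // Read-only access to Search Console data
  scope: "https://www.googleapis.com/auth/webmasters.readonly",
  // Request a refresh token so the credential doesn't need frequent reconnects
  authQueryParameters: "access_type=offline&prompt=consent",
  clientId: "<your client_id from Google Cloud Platform>",
  clientSecret: "<your client_secret from Google Cloud Platform>",
};
```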
Dataki



Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect the Call n8n Workflow Tool with your company's tech stack and create automation workflows

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

in other news I installed @n8n_io tonight and holy moly it's good. it's compatible with EVERYTHING