
Integrate the LangChain Auto-fixing Output Parser into your LLM apps and 422+ other apps and services

Use Auto-fixing Output Parser to easily build AI-powered applications with LangChain and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.

Popular ways to use Auto-fixing Output Parser integration

Airtable node
Twilio node

Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI

This n8n workflow builds an appointment-scheduling AI agent which can take enquiries from prospective customers and help them book an appointment by checking appointment availability. Where no appointment is booked, the agent can send follow-up messages to re-engage leads. After an appointment is booked, the agent can reschedule or even cancel the booking for the user without human intervention. For small outfits, this workflow could contribute the extra "man-power" required to increase business sales.

The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox

2024-10-22: Updated to Cal.com API v2.

How it works
  • The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook.
  • The prospective or existing customer's number is logged in an Airtable base, which we'll be using to track all our enquiries.
  • Next, the message is sent to our AI agent, which can reply to the user and decide whether an appointment booking can be made. The reply is sent via SMS using Twilio (see the sketch below).
  • A scheduled trigger, which runs every day, checks our chat logs for a list of prospective customers who have yet to book an appointment but still show interest. This list is sent to our AI agent, which formulates a personalised follow-up message for each lead and asks them if they want to continue with the booking. The follow-up interaction is logged so as not to send too many messages to the customer.

Requirements
  • A Twilio account to receive customer messages.
  • An Airtable account and base to use as our datastore for enquiries.
  • A Cal.com account to use as our scheduling service.
  • An OpenAI account for our AI model.

Customising this workflow
  • Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
  • Not using Cal.com? Swap it out for an API-enabled service such as Acuity Scheduling, or your own service.
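If you want a feel for the SMS reply step outside of n8n, here is a minimal Python sketch using Twilio's official SDK. The phone numbers, message text, and environment variable names are placeholders rather than values from the template; inside the workflow, the n8n Twilio node handles this step for you.

```python
# Minimal sketch of the SMS reply step using Twilio's Python SDK.
# Credentials and phone numbers below are placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

# Whatever the AI agent decides (availability, confirmation, follow-up) goes out as the body.
message = client.messages.create(
    body="Hi! We have a slot free on Friday at 10am - would you like to book it?",
    from_="+15550001111",  # your Twilio number
    to="+15552223333",     # the prospective customer's number from the webhook
)
print(message.sid)
```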
jimleuk
Jimleuk
HTTP Request node

Allow your AI to call an API to fetch data

Use n8n to bring data from any API to your AI. This workflow uses the Chat Trigger to provide the chat interface and the Custom n8n Workflow Tool to call a second workflow that calls the API. The second workflow uses AI functionality to refine the API request based on the user's query. It then makes the API call and returns the response to the main workflow.

This workflow is used in Advanced AI examples | Call an API to fetch data in the documentation.

To use this workflow:
  • Load it into your n8n instance.
  • Add your credentials as prompted by the notes.

Requires n8n 1.28.0 or above.
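The same pattern can be sketched directly in Python with LangChain: the chat model is handed a tool, and the tool wraps the HTTP request. The GitHub endpoint, tool name, and model choice below are illustrative stand-ins for whichever API your second workflow calls.

```python
# Sketch: let a chat model decide when to call a tool that fetches data from an API.
# The endpoint and tool here are examples, not part of the n8n template.
import requests
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def fetch_open_issues(repo: str) -> str:
    """Fetch the titles of recent open GitHub issues for a repository like 'n8n-io/n8n'."""
    resp = requests.get(f"https://api.github.com/repos/{repo}/issues", timeout=10)
    resp.raise_for_status()
    return "\n".join(issue["title"] for issue in resp.json()[:5])

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([fetch_open_issues])

# The model refines the user's question into a concrete tool call, mirroring the
# second workflow's job of shaping the API request before sending it.
reply = llm.invoke("What are people currently reporting against n8n-io/n8n?")
print(reply.tool_calls)
```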
deborah
Deborah
Notion node
Code node

Notion AI Assistant Generator

This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure.

Set up
Watch this quick set up video 👇

Key features
  • Instant assistant generation: Enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema.
  • Advanced querying: The assistant performs flexible queries, filtering records by multiple fields (e.g. tags, names). It can also search inside Notion pages to pull relevant content from specific blocks.
  • Schema awareness: Understands and interacts with various Notion column types like text, dates, and tags for accurate responses.
  • Reference links: Each query returns direct links to the exact Notion pages that inform the assistant's response, promoting transparency and easy access.
  • Self-validation: The workflow has logic to check the generated assistant, and if any errors are detected, it reruns the agent to fix them.

Ideal for
  • Product managers: Easily access and query product data across Notion databases.
  • Support teams: Quickly search through knowledge bases for precise information to enhance support accuracy.
  • Operations teams: Streamline access to HR, finance, or logistics data for fast, efficient retrieval.
  • Data teams: Automate large dataset queries across multiple properties and records.

How it works
This AI assistant leverages two HTTP request tools: one for querying the Notion database and another for retrieving data within individual pages. It's powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
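As a rough illustration of the query tool the generated assistant relies on, the snippet below calls the Notion database query endpoint directly from Python. The database ID, token variable, and the "Tags" property are placeholders; the real filter depends entirely on your own database schema.

```python
# Sketch of the HTTP call behind the assistant's database query tool.
# Database ID, token, and the "Tags" property are placeholders for your own Notion setup.
import os
import requests

NOTION_TOKEN = os.environ["NOTION_TOKEN"]  # integration token, assumed to be set
DATABASE_ID = "your-database-id"           # taken from the Notion database URL

resp = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers={
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    },
    json={"filter": {"property": "Tags", "multi_select": {"contains": "ai"}}},
    timeout=10,
)
resp.raise_for_status()

# Each result carries its page URL, which is how the assistant can return reference links.
for page in resp.json()["results"]:
    print(page["url"])
```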
max-n8n
Max Tkacz
OpenAI Model node

Force AI to use a specific output format

This workflow is for anyone looking to automatically fetch, validate, and parse complex language-based queries into a structured format. What sets it apart is that it not only processes the model's language output but also fixes invalid outputs before structuring them. Note that to use this template, you need to be on n8n version 1.19.4 or later.
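For context on what the Auto-fixing Output Parser does under the hood, here is a minimal LangChain Python sketch of the same idea: a strict structured parser is wrapped so that invalid model output triggers a second LLM call to repair it. The Event schema and model name are illustrative and not part of the template.

```python
# Sketch of auto-fixing output parsing in LangChain (Python).
# The Event schema and model choice are examples only.
from langchain_openai import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from pydantic import BaseModel, Field

class Event(BaseModel):
    name: str = Field(description="Name of the event")
    date: str = Field(description="Date of the event")

base_parser = PydanticOutputParser(pydantic_object=Event)
llm = ChatOpenAI(model="gpt-4o-mini")

# Wrap the strict parser so that invalid output triggers an automatic fix-up call.
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=llm)

# Malformed JSON (single quotes, trailing chatter) that the base parser would reject:
bad_output = "{'name': 'n8n community call', 'date': 'March 7th'} -- hope this helps!"
print(fixing_parser.parse(bad_output))
```

In n8n, the Auto-fixing Output Parser node plays a role analogous to `fixing_parser` here: it sits between the model and a stricter parser and retries with a correction prompt when validation fails.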
n8n-team
n8n Team
HTTP Request node
Merge node

Extract data from resume and create PDF with Gotenberg

With this workflow you can extract data from resume documents uploaded via a Telegram bot. The workflow transforms the readable content of a PDF resume into structured data using AI nodes and returns a PDF built from formatted, plain HTML. You can modify this workflow to perform other actions with the structured data (e.g. insert it into a database or create other well-formatted documents).

The functionality of this workflow was presented during the n8n community call on March 7, 2024 - a recording of the presentation is available here.

⚠️ This workflow was made for demo purposes. If you want to use it in real life, please make sure the necessary measures for personal data protection are in place.

How it works
  • The user uploads a readable PDF resume document to the Telegram bot.
  • After authentication based on the chat ID parameter, the workflow extracts text from the PDF and passes it to an AI chain with connected sub-nodes: the OpenAI Chat Model and the Structured Output (JSON) Parser.
  • Then, each extracted section (employment history, projects etc.) is formatted into the desired HTML structure.
  • Finally, the document is converted into a new, structured PDF using Gotenberg (see the sketch below).

💡 This workflow requires an installed Gotenberg instance. If you are not familiar with this software, please have a look at my YouTube tutorial. You can also replace the call to Gotenberg with another PDF generation service (such as PDFMonkey or ApiTemplate).

Set up steps
  • Create a Telegram bot and add its credentials in n8n.
  • Set your chat ID parameter in the Auth node.
  • Adjust the JSON schema in the Structured Output Parser according to your needs.
  • Optionally: replace the HTTP call to Gotenberg with a PDF generation service of your choice.

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
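As a reference for the final conversion step, here is a minimal Python sketch of the Gotenberg call, assuming a Gotenberg instance listening on localhost:3000. The HTML string stands in for the formatted resume sections the workflow produces.

```python
# Sketch of converting HTML to PDF with Gotenberg's Chromium route.
# Assumes Gotenberg is running locally on port 3000.
import requests

html = "<html><body><h1>Jane Doe</h1><p>Employment history...</p></body></html>"

resp = requests.post(
    "http://localhost:3000/forms/chromium/convert/html",
    files={"files": ("index.html", html, "text/html")},  # Gotenberg expects the file to be named index.html
    timeout=30,
)
resp.raise_for_status()

with open("resume.pdf", "wb") as f:
    f.write(resp.content)  # the response body is the generated PDF
```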
workfloows
Oskar

About Auto-fixing Output Parser

Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect Auto-fixing Output Parser with your company’s tech stack and create automation workflows

We've been using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING