Ollama Chat Model node

Integrate Ollama Chat Model into your LLM apps and connect it with 422+ apps and services

Use Ollama Chat Model to easily build AI-powered applications and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated scenarios.

Popular ways to use the Ollama Chat Model integration

Uses: Ollama Chat Model node

Chat with local LLMs using n8n and Ollama

This n8n workflow lets you interact with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface. By connecting to Ollama, a powerful tool for managing local LLMs, you can send prompts and receive AI-generated responses directly within n8n.

Use cases
  • Private AI interactions: ideal for scenarios where data privacy and confidentiality are important.
  • Cost-effective LLM usage: avoid ongoing cloud API costs by running models on your own hardware.
  • Experimentation & learning: a great way to explore and experiment with different LLMs in a local, controlled environment.
  • Prototyping & development: build and test AI-powered applications without relying on external services.

How it works
  • When chat message received: captures the user's input from the chat interface.
  • Chat LLM Chain: sends the input to the Ollama server, receives the AI-generated response, and delivers it back to the chat interface (see the sketch below).

Set up steps
  • Make sure Ollama is installed and running on your machine before executing this workflow.
  • Edit the Ollama address if it differs from the default.
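Under the hood, the Ollama Chat Model node talks to Ollama's local HTTP API. Here is a minimal sketch of that exchange, assuming Ollama listens on its default address (http://localhost:11434) and a model such as llama3 has been pulled; the model name and function are illustrative, not part of the template:

```javascript
// Minimal sketch: send one chat message to a local Ollama server.
// Assumes Ollama runs on the default port 11434 and the "llama3"
// model has already been pulled (model name is illustrative).
async function chatWithOllama(userMessage) {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: userMessage }],
      stream: false, // return one JSON response instead of a token stream
    }),
  });
  const data = await response.json();
  return data.message.content; // the assistant's reply text
}

chatWithOllama("Why run LLMs locally?").then(console.log);
```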
Mihai Farcas
Uses: Merge node and 7 more

Auto Categorise Outlook Emails with AI

Automate your email management with this workflow, designed for freelancers and business professionals who receive high volumes of email. By leveraging AI-powered categorisation and dynamic email processing, this template helps you organise your inbox and streamline communication for better efficiency and productivity. Check out the YouTube video for step-by-step setup instructions!

How it works
  • Fetch & filter emails: the workflow retrieves emails from your Microsoft Outlook account, filtering out flagged emails and those already categorised.
  • Content preparation: each email is cleaned up and converted to a structured format using Markdown, making it easier for AI processing.
  • AI categorisation: the content is analysed by an AI model, which sorts each email into predefined categories (e.g., Action, Junk, Business, SaaS) based on its context and content (see the sketch below).
  • Categorisation & folder management: the categorised emails are updated in Microsoft Outlook and moved to folders such as "Junk Email" or "Receipts" based on the AI's classification.
  • Conditional processing & final checks: additional checks ensure that only unread emails are processed, and errors are handled gracefully to keep the workflow stable.

Set up steps
  • Connect Microsoft Outlook: link your Microsoft Outlook account using the built-in credentials node to enable email fetching, updating, and folder management.
  • Configure the AI model (Ollama API): connect to the Ollama API and choose your preferred language model for categorisation.
  • Modify email categories (optional): customise the categories and subcategories within the workflow to suit your email management needs.
  • Set up error handling: review the error-handling node settings to ensure smooth workflow execution.

This template offers a robust solution for managing and organising your inbox, helping you save time and keep your focus on important emails.
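For orientation, the categorisation step boils down to a constrained-classification prompt against the Ollama API. A minimal sketch, assuming a local Ollama server and using the template's category names; the model name, prompt wording, and fallback choice are illustrative assumptions:

```javascript
// Hedged sketch of the categorisation step: ask a local Ollama model to
// pick exactly one of the workflow's predefined categories for an email
// that has already been converted to Markdown.
async function categoriseEmail(emailMarkdown) {
  const categories = ["Action", "Junk", "Business", "SaaS"];
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // illustrative; use whichever model you configured
      messages: [
        {
          role: "system",
          content: `Classify the email into exactly one category: ${categories.join(", ")}. Reply with the category name only.`,
        },
        { role: "user", content: emailMarkdown },
      ],
      stream: false,
    }),
  });
  const data = await response.json();
  const category = data.message.content.trim();
  // Fall back to "Action" if the model answers outside the allowed set.
  return categories.includes(category) ? category : "Action";
}
```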
Wayne Simpson
Uses: Ollama Chat Model node and 3 more

Extract personal data with self-hosted LLM Mistral NeMo

This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.

πŸ“– For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

πŸ”‘ Key features
  • Local LLM: connect Ollama to run the Mistral NeMo LLM locally, providing a foundation for compliant data processing that keeps sensitive information on-premises.
  • Data extraction: convert unstructured text to a consistent JSON format; adjust the JSON schema to meet your specific data extraction needs (see the sketch below).
  • Error handling: implement auto-fixing for LLM outputs and include an error output for further processing.

βš™οΈ Setup and configuration
Prerequisites: n8n AI Starter Kit installed.
Configuration steps:
  • Add the Basic LLM Chain node with system prompts.
  • Set up the Ollama Chat Model with optimized parameters.
  • Define the JSON schema in the Structured Output Parser node.

πŸ” Further resources
  • Run LLMs locally with n8n
  • Video tutorial on using local AI with n8n

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!
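As a rough picture of the extraction step, here is a sketch that asks a local Mistral NeMo model (via Ollama's JSON output mode) for structured personal data. The field names in the expected shape are illustrative assumptions; adapt them to the schema you define in the Structured Output Parser node:

```javascript
// Hedged sketch: extract personal data as strict JSON from a local
// Mistral NeMo model served by Ollama. Field names are illustrative.
async function extractPersonalData(text) {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-nemo",
      format: "json", // ask Ollama to constrain the output to valid JSON
      stream: false,
      messages: [
        {
          role: "system",
          content:
            "Extract the person's details from the text. " +
            'Respond only with JSON shaped like {"name": string, "email": string, "phone": string}. ' +
            "Use null for any missing field.",
        },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await response.json();
  // May throw if the model strays from the schema; the workflow's
  // auto-fixing output parser covers that case.
  return JSON.parse(data.message.content);
}
```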
Yulia
Uses: HTTP Request node, Ollama Chat Model node, and 3 more

πŸ‹DeepSeek V3 Chat & R1 Reasoning Quick Start

This n8n workflow demonstrates multiple ways to harness DeepSeek's AI models in your automation pipeline!

🌟 Core features
  • Multiple integration methods πŸ”Œ: local deployment of DeepSeek-R1 using Ollama, direct API integration with DeepSeek Chat V3, a conversational agent with a memory buffer, and HTTP request implementations in both raw and JSON formats.
  • Model options 🧠: DeepSeek Chat V3 for general conversation, DeepSeek-R1 for advanced reasoning, and a memory-enabled agent for persistent context.

Quick setup πŸ› οΈ
  • API configuration: the base URL is https://api.deepseek.com; get your API key from platform.deepseek.com/api_keys (see the sketch below).
  • Local setup πŸ’»: install Ollama for local deployment, set up DeepSeek-R1 via Ollama, and configure local credentials in n8n.

Implementation details πŸ”§
  • Conversational agent: Window Buffer Memory for context, customizable system messages, and built-in error handling with retries.
  • API endpoints 🌐: chat completions for the V3 and R1 models, compatible with the OpenAI API format.
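For the direct API path, DeepSeek exposes an OpenAI-compatible chat completions endpoint at the base URL above. A minimal sketch; the API key placeholder is illustrative, and the function wrapper is ours rather than part of the template:

```javascript
// Hedged sketch: call DeepSeek's OpenAI-compatible chat completions API.
// Get a real key from platform.deepseek.com/api_keys.
async function askDeepSeek(prompt, model = "deepseek-chat") {
  const response = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_DEEPSEEK_API_KEY", // placeholder
    },
    body: JSON.stringify({
      model, // "deepseek-chat" for V3; "deepseek-reasoner" for R1
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```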
Joseph LePage
Uses: GitHub node and 5 more

Fetch Dynamic Prompts from GitHub and Auto-Populate n8n Expressions in Prompt

Who is this for? This workflow is designed for AI engineers, automation specialists, and content creators who need a scalable system to dynamically manage prompts stored in GitHub. It eliminates manual updates, enforces required-variable checks, and ensures that AI interactions always receive fully processed prompts.

πŸš€ What problem does this solve? Manually managing AI prompts can be inefficient and error-prone. This workflow:
  βœ… Fetches dynamic prompts from GitHub
  βœ… Auto-populates placeholders with values from the setVars node
  βœ… Ensures all required variables are present before execution
  βœ… Processes the formatted prompt through an AI agent

πŸ›  How this workflow works: three branches handle prompt retrieval, variable validation, and AI processing.

1️⃣ Retrieve the prompt from GitHub (HTTP Request β†’ Extract from File β†’ SetPrompt)
  • The workflow starts manually or via an external trigger and fetches a text-based prompt stored in a GitHub repository.
  • The Extract from File node retrieves the content from the GitHub file.
  • The SetPrompt node stores the prompt, making it accessible for processing.
  πŸ“Œ Note: the prompt must contain variables in n8n expression format (e.g., {{ $json.company }}) so they can be dynamically replaced.

2️⃣ Extract & auto-populate variables (Check All Prompt Vars β†’ Replace Variables)
  • A Code node scans the prompt for placeholders in n8n expression format ({{ $json.variableName }}).
  • The workflow compares the required variables against the setVars node: βœ… if all variables are present, it proceeds to variable replacement; ❌ if any are missing, it stops and returns an error listing them (a sketch of this logic follows this description).
  • The Replace Variables node replaces all placeholders with values from setVars.
  πŸ“Œ Example of a properly formatted GitHub prompt: Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}. This ensures seamless replacement when processed in n8n.

3️⃣ AI processing & output (AI Agent β†’ Prompt Output)
  • The Set Completed Prompt node stores the final, processed prompt.
  • The AI Agent node (Ollama Chat Model) processes the prompt.
  • The Prompt Output node returns the fully formatted response.
  πŸ“Œ Optional: modify this to use OpenAI, Claude, or other AI models.

⚠️ Error handling: if a required variable is missing, the workflow stops execution and provides an error message such as ⚠️ Missing Required Variables: ["launch_date"]. This ensures no incomplete prompts are sent to AI agents.

βœ… Example use case
  πŸ“œ GitHub prompt file (using n8n expressions): Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
  πŸ”Ή Variables in the setVars node: { "company": "PropTechPro", "features": "AI-powered Property Management", "launch_date": "March 15, 2025" }
  βœ… Successful output: Hello PropTechPro, your product AI-powered Property Management launches on March 15, 2025.
  🚨 Error output (if launch_date is missing): ⚠️ Missing Required Variables: ["launch_date"]

πŸ”§ Setup instructions
  1️⃣ Connect your GitHub repository: store your prompt in a public or private GitHub repo; the workflow fetches the raw file via the GitHub API.
  2️⃣ Configure the SetVars node: define the required variables, making sure the names match those used in the prompt.
  3️⃣ Test & run: click Test Workflow to execute. If variables are missing, an error is shown; otherwise the workflow outputs the fully formatted prompt.

⚑ How to customize this workflow
  πŸ’‘ Need CRM or database integration? Connect the setVars node to an Airtable, Google Sheets, or HubSpot API to pull variables dynamically.
  πŸ’‘ Want to modify the AI model? Replace the Ollama Chat Model with OpenAI, Claude, or a custom LLM endpoint.

πŸ“Œ Why use this workflow?
  βœ… No manual updates required – fetches prompts dynamically from GitHub.
  βœ… Prevents broken prompts – ensures required variables exist before execution.
  βœ… Works for any use case – handles AI chat prompts, marketing messages, and chatbot scripts.
  βœ… Compatible with all n8n deployments – works on Cloud, Self-Hosted, and Desktop versions.
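Here is a minimal sketch of the variable check and replacement as an n8n Code node might implement it. The node reference name ('setVars') and the input field name ('prompt') are illustrative assumptions drawn from the description, not the template's exact internals:

```javascript
// Hedged sketch of the "Check All Prompt Vars" / "Replace Variables" logic
// in an n8n Code node (Run Once for All Items mode).
const prompt = $input.first().json.prompt;      // prompt text fetched from GitHub
const vars = $('setVars').first().json;         // values defined in the setVars node

// Collect every {{ $json.variableName }} placeholder in the prompt.
const placeholderPattern = /\{\{\s*\$json\.(\w+)\s*\}\}/g;
const required = [...prompt.matchAll(placeholderPattern)].map((m) => m[1]);

// Stop with an error if any required variable has no value in setVars.
const missing = [...new Set(required)].filter((name) => vars[name] === undefined);
if (missing.length > 0) {
  throw new Error(`⚠️ Missing Required Variables: ${JSON.stringify(missing)}`);
}

// Replace each placeholder with its value from setVars.
const completedPrompt = prompt.replace(placeholderPattern, (_, name) => vars[name]);

return [{ json: { completedPrompt } }];
```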
RealSimple Solutions
Uses: Merge node, Code node, and 7 more

Detect hallucinations using specialised Ollama model bespoke-minicheck

Overview: this workflow performs automated fact-checking of texts. It uses AI models to compare a given text with a list of facts and identify potential discrepancies or hallucinations.

Components
1. Input: the workflow can be initiated in two ways: (a) manually via the "When clicking 'Test workflow'" trigger, or (b) from another workflow via the "When Executed by Another Workflow" trigger. Required inputs: facts (a list of verified facts) and text (the text to be checked).
2. Text preparation: the Code node splits the input text into individual sentences, taking date specifications and list elements into account.
3. Fact checking: each sentence is individually compared with the given facts using the bespoke-minicheck Ollama model, which responds with "Yes" or "No" for each sentence (see the sketch below).
4. Filtering and aggregation: sentences marked "No" (not fact-based) are filtered out, and the filtered results are aggregated.
5. Summary: a larger language model (Qwen2.5) creates a summary of the results, containing the number of incorrect factual statements, a list of the incorrect statements, and a final assessment of the article's accuracy.

Usage
  • Ensure the bespoke-minicheck model is installed in Ollama (ollama pull bespoke-minicheck).
  • Prepare a list of verified facts.
  • Enter the text to be checked.
  • Start the workflow; the results are output as a structured summary.

Notes: the workflow ignores small talk and focuses on verifiable factual statements. Accuracy depends on the quality of the provided facts and the performance of the AI models.

Customization options: the summarization step can be adjusted or removed to return only the raw data of the issues found, and the AI models can be swapped as needed. This workflow provides an efficient method for automated fact-checking and can be easily integrated into larger systems or editorial workflows.
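As a rough sketch of the per-sentence check, the snippet below asks bespoke-minicheck whether each sentence is grounded in the provided facts. The "Document:/Claim:" prompt shape follows the model's fact-checking convention as we understand it, and the naive sentence splitter stands in for the workflow's more careful Code node; treat both as assumptions:

```javascript
// Hedged sketch: flag sentences not supported by the facts using a local
// bespoke-minicheck model (run `ollama pull bespoke-minicheck` first).
async function findHallucinations(facts, text) {
  // Simplified sentence split; the real Code node also handles dates and lists.
  const sentences = text.split(/(?<=[.!?])\s+/).filter((s) => s.trim());
  const hallucinations = [];
  for (const sentence of sentences) {
    const response = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "bespoke-minicheck",
        prompt: `Document: ${facts.join(" ")}\nClaim: ${sentence}`,
        stream: false,
      }),
    });
    const data = await response.json();
    // The model answers "Yes" (grounded) or "No" (not fact-based).
    if (!data.response.trim().toLowerCase().startsWith("yes")) {
      hallucinations.push(sentence);
    }
  }
  return hallucinations;
}
```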
Guido Zockoll


Similar integrations

  • Embeddings Google Gemini node
  • Binary Input Loader node
  • Embeddings Cohere node
  • Hugging Face Inference Model node
  • OpenAI Chat Model node
  • HTTP Request Tool node
  • Pinecone: Insert node
  • AWS Bedrock Chat Model node

Over 3000 companies switch to n8n every single week

Connect Ollama Chat Model with your company’s tech stack and create automation workflows

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. πŸ€—