
Extract personal data with self-hosted LLM Mistral NeMo

Created by Yulia · Last updated 4 months ago


This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.

📖 For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

🔑 Key Features

  1. Local LLM

    • Connect Ollama to run Mistral NeMo LLM locally
    • Provide a foundation for compliant data processing, keeping sensitive information on-premises
  2. Data extraction

    • Convert unstructured text into a consistent JSON format
    • Adjust the JSON schema to your specific extraction needs (see the example schema after this list)
  3. Error handling

    • Implement auto-fixing for LLM outputs
    • Include error output for further processing
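
For reference, here is a minimal sketch of what the parser's schema could look like. The fields (name, email, city) are illustrative assumptions, not part of the template; replace them with whatever attributes you need to extract:

```json
{
  "type": "object",
  "properties": {
    "name":  { "type": "string", "description": "Full name of the person" },
    "email": { "type": "string", "description": "Email address, if present in the text" },
    "city":  { "type": "string", "description": "City the person mentions living in" }
  },
  "required": ["name"]
}
```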

⚙️ Setup and configuration

Prerequisites

  • A self-hosted n8n instance with the LangChain (AI) nodes available
  • Ollama installed and running on a machine your n8n instance can reach
  • The Mistral NeMo model pulled locally, for example with ollama pull mistral-nemo

Configuration steps

  1. Add the Basic LLM Chain node with system prompts.
  2. Set up the Ollama Chat Model with sampling parameters suited to extraction (for example, a low temperature for more deterministic output).
  3. Define the JSON schema in the Structured Output Parser node.
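
To make these settings concrete, here is a rough sketch of an equivalent request body for Ollama's /api/chat endpoint. This is not the exact payload n8n sends; the temperature value and prompt wording are assumptions chosen for deterministic extraction:

```json
{
  "model": "mistral-nemo",
  "format": "json",
  "options": { "temperature": 0.1 },
  "stream": false,
  "messages": [
    { "role": "system", "content": "Extract the person's name, email and city from the message. Respond with JSON only." },
    { "role": "user", "content": "Hi, I'm Max from Berlin. You can reach me at max@example.com." }
  ]
}
```

Setting format to json constrains Ollama to emit valid JSON, which the Structured Output Parser then validates against your schema; the auto-fixing step handles any responses that still come back malformed.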

🔍 Further resources

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!