
๐Ÿ”๐Ÿฆ™๐Ÿค– Private & local Ollama self-hosted AI assistant

Created by: Joseph LePage

Last update: 3 months ago


Transform your local N8N instance into a powerful chat interface using any local, private Ollama model, with zero cloud dependencies ☁️. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses 💬.

How it works 🔄

  • 💭 Chat messages trigger the workflow
  • 🧠 Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
  • 📊 Responses are formatted as structured JSON
  • ⚡ Error handling ensures robust operation
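The flow above can be sketched outside of N8N as a plain HTTP call to Ollama's local chat API. This is a minimal sketch rather than the workflow itself: the default endpoint `http://localhost:11434/api/chat` assumes a stock local install, and the helper names (`build_payload`, `parse_reply`) are illustrative, not part of the template.

```python
import json

# Default local Ollama chat endpoint (assumption: a stock local install).
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(user_message: str, model: str = "llama3.2") -> dict:
    """Build an Ollama /api/chat request that asks for JSON-formatted output."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Reply only with a valid JSON object."},
            {"role": "user", "content": user_message},
        ],
        "format": "json",   # Ollama constrains the reply to valid JSON
        "stream": False,    # return one complete response, not a token stream
    }

def parse_reply(raw_reply: str) -> dict:
    """Parse the model's reply; fall back to an error object on malformed JSON."""
    try:
        return json.loads(raw_reply)
    except json.JSONDecodeError:
        return {"error": "model returned malformed JSON", "raw": raw_reply}

# Offline demo: what parsing a well-formed structured reply looks like.
demo_reply = '{"response": "Local inference keeps data on your machine."}'
print(parse_reply(demo_reply)["response"])
```

With a running Ollama instance, POSTing `json.dumps(build_payload(...))` to `OLLAMA_URL` returns a JSON body whose reply text sits under `message.content`; the fallback branch in `parse_reply` mirrors the error-handling step in the workflow.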

Setup steps 🛠️

  • 📥 Install N8N and Ollama
  • ⚙️ Download the Llama 3.2 model (or another model)
  • 🔑 Configure Ollama API credentials
  • ✨ Import and activate the workflow
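Assuming a default local Ollama install, the model-download step above typically looks like this on the command line (`llama3.2` is the model tag this template targets; substitute any other tag from the Ollama library):

```shell
# Pull the Llama 3.2 model into the local Ollama library
ollama pull llama3.2

# Confirm the model is available locally
ollama list

# Ollama serves its API on http://localhost:11434 by default;
# point the N8N Ollama credentials at that address.
```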

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure 🚀.