This workflow builds a fully private, self-hosted AI chatbot using Meta Llama models. Unlike cloud-based AI APIs, every conversation stays on your infrastructure — no data leaves your environment. The chatbot remembers conversation history per session, routes different query types to specialized Llama prompts, logs all interactions, and can escalate unresolved queries to a human agent via Slack.
Powered by Ollama (local) or Groq/Together AI (cloud Llama endpoints) — configurable in one node.
The goal: give businesses a production-grade, private AI chatbot.
Most businesses cannot send sensitive conversations to OpenAI or Anthropic due to privacy, compliance, and data-residency constraints.
Llama models run fully on-premise, so this workflow gives those businesses the same quality of AI chatbot experience with complete data sovereignty.
Monetization: sell this as a private AI chatbot deployment package to enterprises. Setup fee plus monthly hosting — recurring revenue.
Stage A — Message Intake
Webhook receives incoming chat message with session ID and user message text. Set node stores Llama endpoint config and normalizes the payload.
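A minimal sketch of this normalization step as it might look inside an n8n Code node. The field names (sessionId, message, botPersona, userName) follow the sample request body at the end of this page; the defaults and the config values are illustrative assumptions, not the workflow's exact settings.

```javascript
// Illustrative endpoint config; swap provider/baseUrl/model for Groq or Together AI.
const LLAMA_CONFIG = {
  provider: "ollama",                // "ollama" | "groq" | "together" (assumed keys)
  baseUrl: "http://localhost:11434", // Ollama's default local address
  model: "llama3",                   // assumed model name
};

// Normalize the incoming webhook payload and apply safe defaults.
function normalizePayload(body) {
  if (!body || typeof body.message !== "string" || !body.message.trim()) {
    throw new Error("message is required");
  }
  return {
    sessionId: body.sessionId || `anon-${Date.now()}`,
    message: body.message.trim(),
    userId: body.userId || "unknown",
    botPersona: body.botPersona || "general",
    userName: body.userName || "there",
    config: LLAMA_CONFIG,
  };
}
```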
Stage B — Session Memory
Code node loads conversation history for the session from an in-memory store. Appends the new user message to build the full context window for Llama.
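The session-memory step can be sketched like this. A module-level Map stands in for the in-memory store (n8n's workflow static data works the same way for a single instance); the MAX_TURNS cap is an assumed value to keep the Llama context window bounded.

```javascript
// In-memory session store: sessionId -> array of chat messages.
const sessions = new Map();
const MAX_TURNS = 20; // assumed cap on messages kept per session

function loadHistory(sessionId) {
  if (!sessions.has(sessionId)) sessions.set(sessionId, []);
  return sessions.get(sessionId);
}

// Append the new user message and trim the oldest turns so the
// context sent to Llama stays within a bounded window.
function appendUserMessage(sessionId, text) {
  const history = loadHistory(sessionId);
  history.push({ role: "user", content: text });
  while (history.length > MAX_TURNS) history.shift();
  return history;
}
```

Note that a plain in-memory store resets when the n8n instance restarts; a Redis or database-backed store is the usual upgrade for multi-instance deployments.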
Stage C — Intent Router
IF node checks the message for keywords to classify intent: support issue, sales inquiry, general question, or escalation request. Routes to the matching Llama system prompt branch.
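A sketch of that keyword router. The keyword lists and intent names below are illustrative assumptions; the real workflow's rules live in the IF node's conditions.

```javascript
// Intents are checked in order; escalation wins over everything else.
const INTENT_KEYWORDS = {
  escalation: ["human", "agent", "escalate"],
  support: ["refund", "broken", "damaged", "not working", "error", "help"],
  sales: ["price", "pricing", "buy", "quote", "upgrade"],
};

// Return the first intent whose keyword appears in the message,
// falling back to "general" when nothing matches.
function routeIntent(message) {
  const text = message.toLowerCase();
  for (const [intent, keywords] of Object.entries(INTENT_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return intent;
  }
  return "general";
}
```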
Stage D — Llama Inference
HTTP Request calls the Llama API (Ollama local, Groq, or Together AI). Sends full conversation history plus the matched system prompt. Returns the assistant reply.
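The request body for the local option can be sketched as below, targeting Ollama's /api/chat endpoint (Groq and Together AI instead expose the OpenAI-compatible /v1/chat/completions shape, so the URL and body differ slightly). The model name and system prompts are assumptions for illustration.

```javascript
// One system prompt per intent branch; wording is illustrative.
const SYSTEM_PROMPTS = {
  support: "You are a patient customer-support assistant.",
  sales: "You are a helpful sales assistant.",
  escalation: "Acknowledge the user and confirm a human will follow up.",
  general: "You are a friendly general-purpose assistant.",
};

// Build the Ollama /api/chat request: matched system prompt first,
// then the full per-session conversation history.
function buildChatRequest(intent, history, model = "llama3") {
  return {
    url: "http://localhost:11434/api/chat",
    body: {
      model,
      stream: false, // single JSON response instead of a token stream
      messages: [
        { role: "system", content: SYSTEM_PROMPTS[intent] },
        ...history,
      ],
    },
  };
}
```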
Stage E — Response Handling
Code node parses the Llama output, updates the session memory, checks if escalation is needed, and formats the final response.
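A sketch of the response-handling step. It assumes the Ollama /api/chat response shape ({ message: { role, content } }); the escalation heuristic, an explicit [ESCALATE] marker that the system prompt asks the model to emit, is an illustrative convention rather than the workflow's exact check.

```javascript
// Parse the Llama reply, flag escalation, and record the turn in memory.
function handleLlamaResponse(apiJson, history) {
  const raw = (apiJson.message && apiJson.message.content) || "";
  const needsEscalation = raw.includes("[ESCALATE]");
  const reply = raw.replace("[ESCALATE]", "").trim();
  history.push({ role: "assistant", content: reply });
  return { reply, needsEscalation };
}
```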
Stage F — Logging and Delivery
Google Sheets logs every turn. Slack fires only when escalation is flagged. Webhook responds with the chatbot reply and session metadata.
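The logging row and the Slack condition can be sketched as follows. The column order and the alert message format are illustrative assumptions.

```javascript
// One spreadsheet row per conversation turn.
function buildLogRow(turn) {
  return [
    new Date().toISOString(),
    turn.sessionId,
    turn.userId,
    turn.intent,
    turn.message,
    turn.reply,
    turn.needsEscalation ? "ESCALATED" : "OK",
  ];
}

// Slack payload is produced only when the escalation flag is set.
function buildSlackAlert(turn) {
  if (!turn.needsEscalation) return null;
  return {
    text: `Escalation for session ${turn.sessionId}: "${turn.message}"`,
  };
}
```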
Option A (Local / Private):
Option B (Cloud Llama via Groq — fastest):
Option C (Together AI):
Steps for all options:
5. Open Set Llama Config node — fill in all values
6. Set SLACK_WEBHOOK_URL and GOOGLE_SHEET_ID
7. Activate and POST to /webhook/llama-chat
{
  "sessionId": "user-abc-123",
  "message": "My order arrived damaged and I need a refund",
  "userId": "user_123",
  "botPersona": "support",
  "userName": "Sarah"
}
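A small client-side sketch of calling the activated webhook with that payload. The request-options builder is a hypothetical helper; the base URL and response fields depend on your n8n instance.

```javascript
// Build fetch options for POSTing a chat payload to the workflow webhook.
function buildChatRequestOptions(payload) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
}

// Usage (base URL is a placeholder for your n8n instance):
//   const res = await fetch("https://n8n.example.com/webhook/llama-chat",
//     buildChatRequestOptions({ sessionId: "user-abc-123", message: "Hi" }));
//   const { reply } = await res.json();
```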
Explore More Automation:
Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.