Create a Telegram Bot with Mistral AI and Conversation Memory
A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between providers such as OpenAI, Anthropic, Google AI, or any other API-based AI service.
How it works
The workflow creates an intelligent Telegram bot that:
- Maintains conversation history for each user
- Provides contextual AI responses using any AI API service
- Handles different message types and commands
- Manages chat sessions with a /clear command
- Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)
Set up steps
Prerequisites
- Telegram Bot Token (from @BotFather)
- AI API Key (from any AI service provider)
- n8n instance with webhook capability
Configuration Steps
1. Create Telegram Bot
   - Message @BotFather on Telegram
   - Create a new bot with the /newbot command
   - Save the bot token for credentials setup
2. Choose Your AI Provider
   - OpenAI: Get an API key from the OpenAI platform
   - Anthropic: Sign up for Claude API access
   - Google AI: Get a Gemini API key
   - NVIDIA: Access LLaMA and Mistral models
   - Hugging Face: Use the Inference API
   - Any other AI API service
3. Set up Credentials in n8n
   - Add Telegram API credentials with your bot token
   - Add Bearer Auth/API Key credentials for your chosen AI service
   - Test both connections
4. Deploy Workflow
   - Import the workflow JSON
   - Customize the AI API call (see the customization section)
   - Activate the workflow
   - Set the webhook URL in the Telegram bot settings (see the sketch below)
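The Telegram Trigger node normally registers its webhook automatically when the workflow is activated, so manual setup is often unnecessary. If you want to verify the bot token or point Telegram at a custom webhook URL yourself, here is a minimal Node.js sketch (assumes Node 18+ for global fetch; `BOT_TOKEN` and `WEBHOOK_URL` are placeholders you supply):

```javascript
// Verify the bot token and (optionally) register the n8n webhook by hand.
// Both getMe and setWebhook are standard Telegram Bot API methods.
const BOT_TOKEN = process.env.BOT_TOKEN;     // from @BotFather
const WEBHOOK_URL = process.env.WEBHOOK_URL; // your n8n webhook URL

async function main() {
  // getMe confirms the token is valid and returns the bot's identity.
  const me = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/getMe`).then(r => r.json());
  console.log('Bot:', me.result && me.result.username);

  // setWebhook tells Telegram where to deliver updates.
  const hook = await fetch(
    `https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${encodeURIComponent(WEBHOOK_URL)}`
  ).then(r => r.json());
  console.log('Webhook registered:', hook.ok);
}

main().catch(console.error);
```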
Features
Core Functionality
- Smart Message Routing: Automatically categorizes incoming messages (commands, text, non-text)
- Conversation Memory: Maintains chat history for each user (last 10 messages)
- AI-Powered Responses: Integrates with any AI API service for intelligent replies
- Command Support: Built-in /start and /clear commands
Message Types Handled
- Text Messages: Processed through the AI model with conversation context
- Commands: Special handling for bot commands
- Non-text Messages: Polite error message for unsupported content
Memory Management
- User-specific chat history storage
- Automatic history trimming (keeps the last 10 messages)
- Global state management across workflow executions (see the sketch below)
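A minimal sketch of how this kind of per-user memory can be kept in an n8n Code node using workflow static data; the node name and field names here are illustrative, not necessarily the exact ones used in the template:

```javascript
// Hypothetical "Manage History" step: append the incoming message to a per-user
// history kept in workflow static data, trimming it to the last 10 entries.
const staticData = $getWorkflowStaticData('global'); // persists across executions
staticData.histories = staticData.histories || {};

const msg = $input.first().json.message;              // Telegram update payload
const userId = String(msg.from.id);
const history = staticData.histories[userId] || [];

history.push({ role: 'user', content: msg.text });
staticData.histories[userId] = history.slice(-10);    // keep only the last 10 messages

return [{ json: { userId, history: staticData.histories[userId] } }];
```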
Bot Commands
- /start - Welcome message with bot introduction
- /clear - Clears the conversation history for a fresh start (see the sketch below)
- Regular text - Processed by the AI with conversation context
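A rough illustration of how the two commands can be handled in a Code node; the welcome and confirmation texts are placeholders, and the `histories` store matches the memory sketch above:

```javascript
// Illustrative /start and /clear handling; any other text falls through to the AI branch.
const staticData = $getWorkflowStaticData('global');
const msg = $input.first().json.message;
const userId = String(msg.from.id);

let replyText;
if (msg.text === '/start') {
  replyText = 'Hi! I am an AI assistant with conversation memory. Send me a message to begin.';
} else if (msg.text === '/clear') {
  if (staticData.histories) delete staticData.histories[userId]; // wipe this user's history
  replyText = 'Conversation history cleared. Let\'s start fresh!';
}

return [{ json: { chatId: msg.chat.id, replyText } }];
```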
Technical Details
Workflow Structure
1. Telegram Trigger - Receives all incoming messages
2. Message Filtering - Routes messages based on type/content (see the routing sketch below)
3. History Management - Maintains conversation context
4. AI Processing - Generates intelligent responses
5. Response Delivery - Sends formatted replies back to the user
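The filtering step is typically a Switch or If node; the sketch below expresses the same decision logic as Code-node JavaScript so the three branches are explicit (field names follow the Telegram update format):

```javascript
// Classify the Telegram update as a command, plain text, or non-text message.
const msg = $input.first().json.message || {};

let route;
if (typeof msg.text !== 'string') {
  route = 'non_text';                 // photos, stickers, voice notes, etc.
} else if (msg.text.startsWith('/')) {
  route = 'command';                  // /start, /clear, ...
} else {
  route = 'text';                     // forwarded to the AI model with context
}

return [{ json: { route, chatId: msg.chat && msg.chat.id, text: msg.text } }];
```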
AI API Integration (Customizable)
Current Example (NVIDIA), sketched below:
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words
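For reference, this is roughly what the request body for the NVIDIA example could look like when assembled in the "Prepare API Request" step. The NVIDIA endpoint follows the OpenAI-compatible chat schema; `history`, `userText`, and the system prompt wording are assumptions for illustration:

```javascript
// Assemble the chat-completion request body for the current NVIDIA/Mistral example.
const { history, userText } = $input.first().json;   // produced by the memory step

const body = {
  model: 'mistralai/mistral-nemotron',
  messages: [
    // The system prompt sets the bot's behavior, e.g. the ~200-word response limit.
    { role: 'system', content: 'You are a helpful Telegram assistant. Keep replies under 200 words.' },
    ...history,                              // prior turns from the memory store
    { role: 'user', content: userText },     // the new incoming message
  ],
  temperature: 0.6,
  max_tokens: 4096,
};

return [{ json: { body } }];
```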
Easy to Replace with Any AI Service:
OpenAI Example:
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
Anthropic Claude Example:
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
Google Gemini Example:
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
Error Handling
- Non-text message detection and appropriate responses
- API failure handling (see the sketch below)
- Invalid command processing
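In n8n this is usually covered by the HTTP Request node's error settings plus an If node, but the underlying pattern is a guarded call with a fallback reply. A generic Node.js-style sketch, assuming an OpenAI-compatible endpoint and placeholder `apiUrl`, `apiKey`, and `body` values:

```javascript
// If the AI call fails (network error, rate limit, 5xx), reply with a friendly
// fallback message instead of leaving the user without an answer.
async function getAiReply(apiUrl, apiKey, body) {
  try {
    const res = await fetch(apiUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`AI API returned HTTP ${res.status}`);
    const data = await res.json();
    return data.choices[0].message.content;   // OpenAI-compatible response shape
  } catch (err) {
    console.error('AI request failed:', err.message);
    return 'Sorry, I could not reach the AI service right now. Please try again later.';
  }
}
```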
Customization Options
AI Provider Switching
To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
- Change the URL in the HTTP Request node
- Update the request body format in the "Prepare API Request" node
- Update the authentication method if needed
- Adjust the response parsing in the "Save AI Response to History" node (see the parsing sketch below)
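Response parsing is usually the only non-obvious change when switching providers, because each API nests the generated text differently. A sketch of the three common shapes (the exact node code in the template may differ):

```javascript
// Pull the generated text out of the provider's response; only one branch will
// match, depending on which API the HTTP Request node called.
const data = $input.first().json;

const openAiText    = data.choices?.[0]?.message?.content;             // OpenAI / NVIDIA / OpenAI-compatible
const anthropicText = data.content?.[0]?.text;                         // Anthropic Messages API
const geminiText    = data.candidates?.[0]?.content?.parts?.[0]?.text; // Google Gemini generateContent

const replyText = openAiText ?? anthropicText ?? geminiText ?? '';
return [{ json: { replyText } }];
```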
AI Behavior
- Modify the system prompt in the "Prepare API Request" node
- Adjust temperature and response parameters
- Change response length limits
- Customize model-specific parameters
Memory Settings
- Adjust the history length (currently 10 messages)
- Modify the user identification logic
- Customize the data persistence approach
Bot Personality
- Update the welcome message content
- Customize error messages and responses
- Add new command handlers
Use Cases
- Customer Support: Automated first-line support with context awareness
- Educational Assistant: Homework help and learning support
- Personal AI Companion: General conversation and assistance
- Business Assistant: FAQ handling and information retrieval
- AI API Testing: Convenient template for testing different AI services
- Prototype Development: Quick AI chatbot prototyping
Notes
- Requires an active n8n instance for webhook handling
- AI API usage may have rate limits and costs (varies by provider)
- Bot memory persists across workflow restarts
- Supports multiple concurrent users with separate histories
- The template is provider-agnostic: easily switch between AI services
- Perfect starting point for any AI-powered Telegram bot project
Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
| --- | --- | --- |
| OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!