potatoenergy/telegram-ai-replier


A modular Telegram bot using AI for business replies.

Compliance Statement

⚠️ Ethical Usage Requirements

  • Ensure your Telegram Business account complies with Telegram's Terms of Service.
  • Use responsibly and avoid spamming.
  • Be aware of the usage costs associated with your chosen AI provider (OpenAI, Ollama, custom).

Prerequisites

  • A publicly accessible server or hosting service capable of receiving HTTPS webhooks from Telegram.
  • A domain name pointing to your server (recommended for HTTPS).
  • A Telegram Bot Token (get it from @BotFather).
  • A configured AI provider (OpenAI API key, Ollama instance, or a custom OpenAI-compatible API).

Environment Variables

.env configuration example:

```env
# --- Telegram ---
# Your Telegram Bot Token
BOT_TOKEN=YOUR_BOT_TOKEN_HERE
# Your Telegram User ID (for admin commands)
ADMIN_USER_ID=YOUR_TELEGRAM_USER_ID_HERE
# The public URL where Telegram will send webhook updates (e.g., https://your-domain.com/)
TELEGRAM_WEBHOOK_URL=https://your-domain.com/

# --- AI Configuration ---
# AI provider to use: 'openai' (official or custom) or 'ollama'
AI_PROVIDER=openai

# --- OpenAI Settings (required if AI_PROVIDER=openai) ---
# Your OpenAI API Key (required for the official API)
OPENAI_API_KEY=sk-...
# The OpenAI model to use (e.g., gpt-3.5-turbo, gpt-4o)
OPENAI_MODEL=gpt-3.5-turbo
# (Optional) Base URL for a custom OpenAI-compatible API (e.g., proxy, local service)
# If this is set, requests will go to this URL instead of api.openai.com
# OPENAI_BASE_URL=https://your-openai-proxy.com/v1

# --- Ollama Settings (required if AI_PROVIDER=ollama) ---
# URL of your Ollama service (default: http://host.docker.internal:11434)
# Adjust if Ollama runs elsewhere (e.g., a different container or host)
OLLAMA_URL=http://host.docker.internal:11434
# The Ollama model to use (e.g., llama3.2, mistral)
OLLAMA_MODEL=llama3.2

# --- AI General Settings (apply to all providers) ---
# Maximum tokens for the AI response
AI_MAX_TOKENS=500
# Creativity setting for the AI (0.0 to 2.0)
AI_TEMPERATURE=0.7
# System prompt to define the AI's behavior
AI_SYSTEM_PROMPT=You are a helpful assistant for a Telegram Business account. Answer questions politely and concisely.

# --- Rate Limiting ---
# Time window in seconds for rate limiting (e.g., 60 seconds)
RATE_LIMIT_WINDOW=60
# Max requests per window per chat_id
RATE_LIMIT_MAX_REQUESTS=5
```
| Variable | Purpose | Default |
| --- | --- | --- |
| BOT_TOKEN | Telegram Bot Token | - |
| ADMIN_USER_ID | Telegram User ID for admin commands | - |
| TELEGRAM_WEBHOOK_URL | Webhook URL for Telegram | - |
| AI_PROVIDER | AI provider to use (openai, ollama) | - |
| OPENAI_API_KEY | API key for OpenAI (required if AI_PROVIDER=openai) | - |
| OPENAI_MODEL | Model name for OpenAI | gpt-3.5-turbo |
| OPENAI_BASE_URL | Base URL for custom OpenAI-compatible API (optional) | - |
| OLLAMA_URL | URL for Ollama API | http://host.docker.internal:11434 |
| OLLAMA_MODEL | Model name for Ollama | llama3.2 |
| AI_MAX_TOKENS | Max tokens for AI response | 500 |
| AI_TEMPERATURE | Creativity setting for AI | 0.7 |
| AI_SYSTEM_PROMPT | System prompt for AI | "You are a helpful assistant..." |
| RATE_LIMIT_WINDOW | Rate limit time window (seconds) | 60 |
| RATE_LIMIT_MAX_REQUESTS | Max requests per window per chat | 5 |
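To illustrate how the AI settings above combine into a request, the sketch below maps them onto an OpenAI-compatible chat-completions body. The bot itself is written in PHP, so this Python function and its defaults are illustrative only:

```python
import os

def build_chat_payload(user_text: str) -> dict:
    """Map the env settings above onto an OpenAI-compatible
    chat-completions request body (illustrative sketch only)."""
    return {
        "model": os.environ.get("OPENAI_MODEL", "gpt-3.5-turbo"),
        "max_tokens": int(os.environ.get("AI_MAX_TOKENS", "500")),
        "temperature": float(os.environ.get("AI_TEMPERATURE", "0.7")),
        "messages": [
            # The system prompt steers the AI's behavior for every chat.
            {"role": "system", "content": os.environ.get(
                "AI_SYSTEM_PROMPT", "You are a helpful assistant.")},
            {"role": "user", "content": user_text},
        ],
    }

# The body is POSTed to <OPENAI_BASE_URL or https://api.openai.com/v1>/chat/completions
# with an "Authorization: Bearer $OPENAI_API_KEY" header; Ollama accepts a
# similar chat-style request against OLLAMA_URL.
```

Setting OPENAI_BASE_URL only changes where this request is sent, which is why any OpenAI-compatible proxy or local service works unchanged.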

Key Features

1. Business Message Handling:

  • Automatically receives and processes messages sent to your Telegram Business account via webhooks.
  • Replies are generated by the selected AI provider (OpenAI, Ollama, or a custom OpenAI-compatible endpoint).

2. Admin Commands:

  • /start: Provides status information to the configured admin user.

3. Rate Limiting:

  • Prevents spam by limiting the number of requests per chat within a specified time window.

4. Modular Design:

  • Core, AI, and Config modules separated for clarity and maintainability.
  • Supports multiple AI providers via a common interface.

5. Webhook Management:

  • Automatically sets the webhook on startup based on TELEGRAM_WEBHOOK_URL.
  • Provides a status page at the webhook URL to confirm configuration.

Setup

  1. Configure Environment: Copy .env.example to .env and fill in your specific values (Bot Token, Admin ID, Webhook URL, AI Provider details).
  2. Deploy: Use the provided docker-compose.yml to deploy the bot on your server. Ensure your server is configured to receive webhooks on the specified TELEGRAM_WEBHOOK_URL.
  3. Link Business Account: In your Telegram Business profile, link the bot by navigating to "Chatbot" and entering your bot's username.
  4. Test: Send a message to your Telegram Business account. The bot should respond via AI.
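To confirm the webhook from step 2 was registered, you can query Telegram's getWebhookInfo method with any HTTP client; the response's "url" field should match TELEGRAM_WEBHOOK_URL. A small Python helper for building the request URL (the function name is illustrative):

```python
def webhook_info_url(bot_token: str) -> str:
    """Build the Bot API URL that reports the currently registered
    webhook, e.g. for: curl "$(...)"."""
    return f"https://api.telegram.org/bot{bot_token}/getWebhookInfo"
```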

Technical Architecture

Processing Pipeline:

  1. Telegram sends a webhook POST request to TELEGRAM_WEBHOOK_URL.
  2. bot.php receives and routes the request.
  3. WebhookHandler identifies message type (admin/business).
  4. (For business) Rate limit check is performed.
  5. Message text is sent to the configured AIProvider.
  6. AI response is received.
  7. The Bot class sends the response back to the Telegram Business chat.
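Steps 3-7 can be sketched end to end. This Python sketch stands in for the PHP handlers: the AIProvider protocol mirrors the common provider interface from the Modular Design section, and EchoProvider plus the dictionary shapes are illustrative stand-ins (business_message is the Bot API field for Telegram Business updates):

```python
from typing import Callable, Protocol

class AIProvider(Protocol):
    """Common interface behind which OpenAI, Ollama, or a custom
    backend sits (step 5)."""
    def complete(self, text: str) -> str: ...

def handle_update(update: dict, provider: AIProvider,
                  allow: Callable[[int], bool] = lambda chat_id: True):
    """Steps 3-7: route a webhook update, apply the rate limit,
    query the AI, and return the reply text (None = dropped)."""
    message = update.get("business_message")
    if message is None:
        return None                      # step 3: not a business message
    chat_id = message["chat"]["id"]
    if not allow(chat_id):               # step 4: rate limit check
        return None
    return provider.complete(message["text"])  # steps 5-7

class EchoProvider:
    """Stand-in provider for testing the pipeline without an AI backend."""
    def complete(self, text: str) -> str:
        return f"AI: {text}"
```

Because the pipeline only depends on the provider interface, swapping OpenAI for Ollama is a configuration change rather than a code change.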

Safety Systems:

  • Rate limiting based on chat_id.
  • Input validation and error handling.
  • Configurable system prompt for AI.

License

MIT License
