This project implements a Telegram bot service designed to generate sarcastic and caustic comments in response to chat messages. Leveraging Ollama for natural language processing, the bot aims to provide "spicy" remarks, adding a unique flavor to group conversations.
This setup is designed to be lightweight and has been successfully tested on low-spec hardware:
- Raspberry Pi 5 (8 GB): the entire stack, including the bot and the Ollama language model, runs efficiently. The model used for testing was `i82blikeu/gemma-3n-E4B-it-GGUF:Q3_K_M`.
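If you want to reproduce this setup, the test model can be pulled ahead of time with the Ollama CLI (this assumes Ollama is already installed and its daemon is running):

```bash
# Pull the GGUF model used during testing on the Raspberry Pi 5
ollama pull i82blikeu/gemma-3n-E4B-it-GGUF:Q3_K_M
```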
To install or update the mduck service, run the following command in your terminal. The script will guide you through the process.
```bash
curl -fsSL https://raw.githubusercontent.com/aatrubilin/mduck/master/install.sh | bash
```

Note for Windows users: run this command in WSL (Windows Subsystem for Linux).
The project is organized as follows:
- `src/mduck/containers/*.py`: dependency-injector containers.
- `src/mduck/repositories/*.py`: data-access layer.
- `src/mduck/routers/*.py`: API layer with FastAPI routers.
- `src/mduck/schemas/*.py`: Pydantic schemas for data validation and serialization.
- `src/mduck/services/*.py`: business logic.
- `src/config/settings.py`: configuration using pydantic-settings.
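To give a rough idea of how these layers connect, here is a minimal, hypothetical sketch of a settings class and a dependency-injector container; the actual class and provider names in the repository may differ:

```python
# Hypothetical example -- illustrates the layering only, not the real mduck code.
from dependency_injector import containers, providers
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    """Configuration loaded from environment variables (src/config/settings.py style)."""

    telegram_token: str = "changeme"
    ollama_url: str = "http://localhost:11434"


class AppContainer(containers.DeclarativeContainer):
    """Wires configuration, the data-access layer, and business logic together."""

    settings = providers.Singleton(Settings)
    # chat_repository = providers.Factory(ChatRepository, ...)      # repositories/: data access
    # comment_service = providers.Factory(CommentService,           # services/: business logic
    #                                     repository=chat_repository)
```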
Follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.
This project uses Poetry for dependency management and packaging. Ensure you have it installed before proceeding.
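If Poetry is not installed yet, the official installer is one common way to get it:

```bash
# Installs Poetry for the current user (official installer)
curl -sSL https://install.python-poetry.org | python3 -
```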
Clone the repository and install the required dependencies using Poetry:
```bash
poetry install
```

To run the webhook application locally, use the following command:
```bash
poetry run run-webhook --host 0.0.0.0 --port 8000 --reload --log-level info
```

The webhook application will be available at `http://0.0.0.0:8000`. The `--reload` flag enables hot-reloading for development.
Arguments:

| Argument | Description | Default |
|---|---|---|
| `--host` | Host address to bind to. | `0.0.0.0` |
| `--port` | Port to listen on. | `8000` |
| `--reload` | Enable auto-reloading. | `False` |
| `--log-level` | Log level. | `info` |
| `--log-format` | Log format. | `json` |
| `--log-file` | Log file path. | `None` |
| `--forwarded-allow-ips` | Comma-separated list of trusted proxy IPs. | `192.168.1.0/24,192.168.2.0/24` |
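Telegram only delivers updates to the webhook once it knows the URL. If the bot does not register its webhook automatically on startup, you can set it manually through the Bot API; the token, public URL, and `/webhook` path below are placeholders, so adjust them to your deployment:

```bash
# Point Telegram at your publicly reachable webhook endpoint (placeholder values)
curl -F "url=https://your-domain.example/webhook" \
  "https://api.telegram.org/bot<BOT_TOKEN>/setWebhook"
```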
To run the polling application locally, use the following command:
```bash
poetry run run-pooling --reload --log-level debug
```

Arguments:

| Argument | Description | Default |
|---|---|---|
| `--log-level` | Log level. | `info` |
| `--log-format` | Log format. | `human` |
| `--log-file` | Log file path. | `None` |
| `--reload` | Enable auto-reloading. | `False` |
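For example, using only the flags documented above (the log path is just an illustration), the polling application can write JSON logs to a file instead of the console:

```bash
# Example invocation: JSON logs written to a file (adjust the path to your setup)
poetry run run-pooling --log-format json --log-file /var/log/mduck/pooling.log
```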
To run tests and check coverage, use:
```bash
tox -e test
```

To lint the code using ruff and mypy, run:

```bash
tox -e lint
```

This entire project was developed using a "vibecoding" approach with the assistance of Gemini, emphasizing rapid prototyping and iterative development.
