
Commit e837a4e

Add AI Chat Bot feature for Kick livestreams
- Introduced a new AI-powered chat bot that interacts with Kick livestream chat.
- Added `transformers` and `torch` as optional dependencies in requirements.txt.
- Created CHAT_FEATURE.md to document the chat bot's features, installation, and usage.
- Implemented chat_service.py to handle AI model loading and message processing.
- Developed kick_chat_bot.py to manage WebSocket connections and chat interactions.
- Added test_chat_bot.py to validate AI chat functionality and integration with Kick.
- Included test_transformers_direct.py for direct testing of transformers library and model loading.
- Established a basic structure for backend services in __init__.py.

1 parent 0ed1f7a commit e837a4e

File tree

8 files changed: +1473 −1 lines


CHAT_FEATURE.md

Lines changed: 229 additions & 0 deletions
@@ -0,0 +1,229 @@
# 🤖 AI Chat Bot for Kick Livestreams

## Overview

KickViewerBOT includes an AI-powered chat bot that can **interact directly with your Kick livestream chat**! The bot reads messages from viewers and responds automatically with AI-generated replies - all running locally on your machine.

## Features

- **Live Chat Integration** - Connects to Kick chat in real-time
- **AI Responses** - Uses Microsoft DialoGPT for natural conversations
- **100% Local** - Runs on your PC, no cloud services
- **Smart Rate Limiting** - Won't spam chat (configurable response rate)
- **Context Aware** - Remembers recent conversation
- **Privacy First** - Your conversations never leave your computer

## How It Works

1. **Connects to Kick Chat** - Monitors your stream's chat via WebSocket
2. **AI Processes Messages** - Lightweight model (~117MB) generates responses
3. **Responds Naturally** - Posts replies back to chat (optional)
4. **Rate Limited** - Smart timing to avoid spam detection

## Installation

The chat bot requires additional Python packages. Install them with:

```bash
pip install transformers torch
```

**Note:** First-time setup will download the AI model (~117MB). This happens automatically on first use.
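
If you prefer to fetch the model ahead of time (for example before going live), you can pre-download it into the local Hugging Face cache. A minimal sketch, assuming the default `microsoft/DialoGPT-small` model:

```python
# Pre-download DialoGPT-small so the first chat message doesn't wait on the ~117MB fetch.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/DialoGPT-small"  # default model assumed throughout this doc

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

print(f"Cached {MODEL_NAME}: ~{model.num_parameters() / 1e6:.0f}M parameters")
```
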
## Quick Start

### 1. Load the AI Model

The model loads automatically when you start the backend. Check logs for:

```
🤖 Chat service initialized (model loading in background)
✅ AI chat model loaded successfully!
```

### 2. Start Chat Bot for Your Stream

**Via WebSocket:**

```javascript
socket.emit("kick_chat_start", {
  channel_name: "your_channel_name",
  auth_token: "your_kick_auth_token", // Optional - needed to send messages
  response_chance: 0.2, // 20% chance to respond to each message
  min_interval: 5, // Minimum 5 seconds between responses
});

socket.on("kick_chat_started", (data) => {
  console.log("Bot started:", data);
});
```

### 3. Monitor Activity

The bot will:

- 📖 Read all chat messages
- 🤔 Decide randomly if it should respond (based on `response_chance`)
- 🤖 Generate AI response with context
- 💬 Send response to chat (if `auth_token` provided)

## Configuration Options

### Response Chance

- `0.1` = 10% (responds rarely, ~1 in 10 messages)
- `0.2` = 20% (balanced, recommended)
- `0.5` = 50% (very active, may seem spammy)

### Min Interval

- `5` seconds = Moderate activity
- `10` seconds = Conservative
- `3` seconds = Active (risk of rate limiting)
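
These two settings work together: the bot only considers replying when the random roll passes *and* enough time has passed since its last reply. A minimal sketch of that decision logic (illustrative only - `should_respond` is a hypothetical helper, not the actual function in `kick_chat_bot.py`):

```python
import random
import time

def should_respond(last_reply_at: float, response_chance: float = 0.2,
                   min_interval: float = 5.0) -> bool:
    """Decide whether the bot replies to an incoming chat message."""
    # Enforce the minimum gap between replies first (rate limiting).
    if time.time() - last_reply_at < min_interval:
        return False
    # Then roll the dice: reply to roughly `response_chance` of eligible messages.
    return random.random() < response_chance

# Example: response_chance=0.2 and min_interval=5 means at most one reply
# every 5 seconds, and on average 1 in 5 eligible messages gets a reply.
last_reply_at = 0.0
if should_respond(last_reply_at):
    last_reply_at = time.time()
    # ...generate and send the AI reply here...
```
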
### Getting Your Auth Token

**To send messages, you need a Kick auth token:**

1. Open Kick.com in your browser and log in
2. Open Developer Tools (F12)
3. Go to Application/Storage → Cookies
4. Find the cookie named `kick_session` or similar
5. Copy its value

**⚠️ Security Warning:** Never share your auth token publicly!

## Technical Details

1. **Model Used:** Microsoft DialoGPT-small (see the generation sketch after this list)

   - Size: 117MB
   - Type: Conversational AI
   - Quality: Good for casual chat

2. **Backend Service:** `backend/services/chat_service.py`

   - Loads the model asynchronously (doesn't block bot startup)
   - Maintains conversation history per session
   - Handles errors gracefully

3. **WebSocket API:**

   - `chat_status` - Check if chat is ready
   - `chat_message` - Send message and get response
   - `chat_clear` - Clear conversation history
   - `chat_history` - Get full conversation
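
For reference, here is a minimal sketch of how a reply with per-session context can be generated with DialoGPT via `transformers`. It follows the standard DialoGPT usage pattern; it is not the exact code in `chat_service.py`, and the `history_ids` bookkeeping is an illustrative assumption:

```python
from typing import Optional

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def generate_reply(message: str, history_ids: Optional[torch.Tensor] = None):
    """Generate one reply, threading previous turns through the model as context."""
    # Encode the new user message, terminated by the EOS token.
    new_ids = tokenizer.encode(message + tokenizer.eos_token, return_tensors="pt")
    # Prepend prior conversation so the model sees the context.
    input_ids = torch.cat([history_ids, new_ids], dim=-1) if history_ids is not None else new_ids
    # Generate a continuation; everything after input_ids is the bot's reply.
    output_ids = model.generate(
        input_ids,
        max_length=min(input_ids.shape[-1] + 64, 1000),
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
    )
    reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    return reply, output_ids  # output_ids becomes the history for the next turn

reply, history = generate_reply("Hello! How are you?")
print(reply)
```
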
## Usage

### Check Status

```javascript
socket.emit("chat_status");
socket.on("chat_status_response", (data) => {
  console.log(data);
  // { available: true, status: 'ready', message: 'AI chat is ready' }
});
```

### Send Message

```javascript
socket.emit("chat_message", {
  message: "Hello! How are you?",
  session_id: "user123", // Optional, defaults to socket ID
});

socket.on("chat_response", (data) => {
  console.log(data.response);
  // AI's response
});
```

### Clear History

```javascript
socket.emit("chat_clear", {
  session_id: "user123",
});
```
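
The same events can also be driven from any Socket.IO client, not just the browser. A minimal Python sketch using the `python-socketio` package, assuming the backend is reachable at `http://localhost:5000` (adjust the URL to your setup):

```python
# pip install "python-socketio[client]"
import socketio

sio = socketio.Client()

@sio.on("chat_status_response")
def on_status(data):
    print("Status:", data)

@sio.on("chat_response")
def on_reply(data):
    print("Bot:", data["response"])

sio.connect("http://localhost:5000")  # assumed backend address
sio.emit("chat_status")
sio.emit("chat_message", {"message": "Hello! How are you?", "session_id": "user123"})
sio.wait()  # keep the client alive to receive responses
```
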
## Performance

- **First Message:** ~2-5 seconds (model initialization)
- **Subsequent Messages:** ~0.5-1 second
- **Memory Usage:** ~500MB RAM when loaded
- **CPU Usage:** Moderate during generation, idle otherwise

## Troubleshooting

### Model Won't Load

**Error:** `transformers library not installed`

**Solution:**

```bash
pip install transformers torch
```

### Slow Response

**Issue:** First response is slow

**Explanation:** Model loads on first use. Subsequent responses are faster.

### Out of Memory

**Issue:** Not enough RAM

**Solutions:**

- Close other applications
- Restart the bot
- Use a smaller model (edit `chat_service.py`)

## Alternative Models

You can use different models by editing `backend/services/chat_service.py`:

```python
# Change this line:
self.model_name = "microsoft/DialoGPT-small"  # 117MB

# To one of these:
# self.model_name = "microsoft/DialoGPT-medium"  # 355MB - Better quality
# self.model_name = "microsoft/DialoGPT-large"   # 774MB - Best quality
# self.model_name = "distilgpt2"                 # 82MB - Faster, simpler
```

## Disabling Chat

If you don't want the chat feature:

1. Simply don't install `transformers` and `torch`
2. The bot will run normally without chat
3. Chat endpoints will return "not available" messages (see the sketch below)
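
This graceful fallback is typically implemented with an optional import guard. A minimal sketch of the pattern (illustrative; the exact guard and message wording in `chat_service.py` may differ):

```python
# Optional-dependency guard: chat only activates when transformers/torch are installed;
# otherwise the chat endpoints report that the feature is not available.
try:
    from transformers import AutoModelForCausalLM, AutoTokenizer  # noqa: F401
    CHAT_AVAILABLE = True
except ImportError:
    CHAT_AVAILABLE = False

def chat_status() -> dict:
    """Example payload for `chat_status` (field names follow the Usage section above)."""
    if not CHAT_AVAILABLE:
        return {"available": False, "status": "unavailable",
                "message": "AI chat not available - install transformers and torch"}
    return {"available": True, "status": "ready", "message": "AI chat is ready"}
```
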
## Security

- **Local Processing:** All AI processing happens on your machine
- **No Data Collection:** Nothing is sent to external servers
- **Session Isolation:** Each user has their own conversation context

## Future Improvements

Planned features:

- [ ] Custom system prompts
- [ ] Model switching via UI
- [ ] Response streaming for real-time typing effect
- [ ] Multi-language support
- [ ] Integration with Kick chat commands

## Need Help?

Open an issue on GitHub with:

- Python version: `python --version`
- Torch version: `pip show torch`
- Error logs from the terminal
