# Environment Variables Template
# Copy this file to .env and replace the placeholder values

# Hugging Face Token (required for PyAnnote speaker diarization models)
# Get token from: https://huggingface.co/settings/tokens
# Accept model licenses at:
# - https://huggingface.co/pyannote/speaker-diarization-3.1
# - https://huggingface.co/pyannote/segmentation-3.0
HF_TOKEN=your_huggingface_token_here
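A forgotten placeholder here usually surfaces only when the diarization model download fails. A hedged sketch of a startup check (the helper name is illustrative, not part of the application; it relies only on the fact that Hugging Face user access tokens begin with `hf_`):

```python
# Hypothetical startup check: catch a missing or placeholder HF_TOKEN early.
import os

def is_placeholder_token(token: str) -> bool:
    """Real Hugging Face user access tokens start with 'hf_'."""
    return (not token
            or token == "your_huggingface_token_here"
            or not token.startswith("hf_"))

if is_placeholder_token(os.environ.get("HF_TOKEN", "")):
    print("Set HF_TOKEN in .env (see https://huggingface.co/settings/tokens)")
```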

# Whisper Model Configuration (optional)
# Default model for transcription. Options: tiny, base, small, medium, large, turbo, large-v3
# 'turbo' is recommended for best balance of speed and accuracy
# WHISPER_MODEL_NAME=turbo

# Compute Type for faster-whisper (optional)
# Precision for inference. Options: float16 (default), int8, int8_float16
# float16: Best quality, requires GPU (recommended for CUDA)
# int8: Lower memory usage, faster on CPU
# int8_float16: Hybrid mode
# COMPUTE_TYPE=float16
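The fallback rules above can be sketched as a small resolver. This is an assumption about how the variable might be consumed (the function name is hypothetical); faster-whisper itself receives the value via `WhisperModel(..., compute_type=...)`:

```python
# Sketch: pick a compute type from COMPUTE_TYPE, falling back per the
# comments above -- float16 when CUDA is available, int8 on CPU.
import os

def resolve_compute_type(has_cuda: bool) -> str:
    configured = os.environ.get("COMPUTE_TYPE")
    if configured in ("float16", "int8", "int8_float16"):
        return configured
    return "float16" if has_cuda else "int8"
```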

# LLM API Configuration (required for AI summarization)
# NOTE: The application automatically appends '/v1/chat/completions' to the URL
# Must use OpenAI-compatible API endpoints
# Example base URLs (do NOT include /v1/chat/completions):
# - OpenAI: https://api.openai.com
# - Ollama (v0.1.14+): http://localhost:11434
# - vLLM: http://localhost:8000
# - LM Studio: http://localhost:1234
LLM_API_URL=http://localhost:1234
LLM_MODEL_NAME=qwen2.5-14b-instruct
LLM_API_KEY=
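Since the application appends the path itself, a trailing slash or an accidentally pasted `/v1/chat/completions` in `LLM_API_URL` is the most common misconfiguration. A hypothetical sketch of the join (illustrative only, not the app's actual code):

```python
# Sketch: build the full OpenAI-compatible endpoint from the base URL,
# tolerating a trailing slash or an already-complete endpoint.
import os

OPENAI_CHAT_PATH = "/v1/chat/completions"

def chat_completions_url(base_url: str) -> str:
    base = base_url.rstrip("/")
    # Guard against users pasting the full endpoint into LLM_API_URL.
    if base.endswith(OPENAI_CHAT_PATH):
        return base
    return base + OPENAI_CHAT_PATH

print(chat_completions_url(os.environ.get("LLM_API_URL", "http://localhost:1234")))
```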

# Database Configuration
# Generate secure password with: openssl rand -base64 32
POSTGRES_PASSWORD=changeme

# Timezone Configuration (optional)
# Timezone offset in hours for export timestamps
# Examples: +8 for Singapore/GMT+8, -5 for EST, 0 for UTC
TIMEZONE_OFFSET=+8
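How an exporter might apply the offset, as a hedged sketch (the variable handling is an assumption; note that Python's `int()` accepts the leading `+` in values like `+8`):

```python
# Sketch: shift a UTC timestamp by TIMEZONE_OFFSET hours for export.
import os
from datetime import datetime, timedelta, timezone

offset_hours = int(os.environ.get("TIMEZONE_OFFSET", "+8"))
local_tz = timezone(timedelta(hours=offset_hours))
utc_noon = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(utc_noon.astimezone(local_tz).isoformat())  # with +8: 2024-01-01T20:00:00+08:00
```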

# GPU Configuration (optional)
# Control which NVIDIA GPUs are visible to the container
# Options: 'all', '0', '0,1', etc.
NVIDIA_VISIBLE_DEVICES=all

# Demo Mode (optional)
# Set to 'true' to skip backend health check (for static deployments like GitHub Pages)
VITE_DEMO_MODE=false

# Port Configuration (optional)
# External ports for nginx reverse proxy
# HTTP_PORT=80
# HTTPS_PORT=443