# Push Website Environment Variables
# Copy this file to .env and fill in your actual values
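# For example, from the project root (one common approach; your shell may differ):
#   cp .env.sample .env
#   # edit .env with your values, then export everything into the current shell:
#   set -a && source .env && set +a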
# =============================================================================
# AI PROVIDER CONFIGURATION (Translation Automation)
# =============================================================================
# AI Provider Selection: 'windsurf' or 'local'
# - windsurf: Uses Windsurf/Anthropic Claude API (cloud-based, paid)
# - local: Uses local AI via OpenWebUI/Ollama (self-hosted, free)
AI_PROVIDER=windsurf
# Windsurf/Anthropic Configuration (when AI_PROVIDER=windsurf)
# Get your API key from your Windsurf dashboard
WINDSURF_API_KEY=your_windsurf_api_key_here
# AI model to use for the cloud provider (e.g., claude-sonnet-4-20250514)
CLOUD_AI_MODEL=claude-sonnet-4-20250514
# Local AI Configuration (when AI_PROVIDER=local)
# OpenWebUI/Ollama endpoint (e.g., http://localhost:11434 for Ollama)
LOCAL_AI_BASE_URL=http://192.168.1.187:11434/
# Model name for local AI (e.g., llama3.1, mistral, codellama, gemma3:27b-it-qat); separate multiple models with | to fall back to the next one if a model fails
LOCAL_AI_MODEL=gemma3:27b-it-qat
# Optional API key for local AI (if required by your setup)
LOCAL_AI_API_KEY=na
# API Timeout Configuration (in milliseconds)
AI_REQUEST_TIMEOUT=60000 # Default timeout for AI requests (60 seconds); for local AI, increase to around 200 seconds (200000)
# Token and Chunk Configuration
# Maximum input tokens for AI requests (adjust based on your AI provider)
AI_MAX_INPUT_TOKENS=100000 # Conservative limit for Claude/large models; for local models you can go higher (gemma supports 128000)
# Maximum tokens per chunk for translation (smaller = more reliable, larger = fewer API calls)
AI_MAX_CHUNK_TOKENS=2000 # Balanced limit for individual chunks; for local AI you can go higher, e.g., 5000
# Average characters per token estimation (used for token counting)
AI_CHARS_PER_TOKEN=3.5 # Conservative estimate for JSON content
# Rate limiting - maximum API calls per minute (5 for Anthropic, 500 for local)
AI_RATE_LIMIT_PER_MINUTE=5 # Prevents hitting provider rate limits (use 5 for Anthropic, up to 500 for local)
# =============================================================================
# DEPLOYMENT CONFIGURATION
# =============================================================================
# GitHub deployment settings
REACT_APP_PUBLIC_URL=https://push.org