Closed

Changes from all commits (44 commits)
cf44a8f
fix: Enable binary file imports for folder and git clone operations
Stijnus Aug 30, 2025
99de02d
Add sidebar store implementation
Stijnus Aug 30, 2025
1148a37
Fix linting issues in GitUrlImport and folderImport
Stijnus Aug 30, 2025
3dcc3ce
Integrate sidebar state with global store
Stijnus Aug 30, 2025
1aa61fa
feat: enhance integration tabs with advanced analytics and performanc…
Stijnus Aug 30, 2025
ed5a3ba
Refactor LLM token limit logic and add capability service
Stijnus Sep 1, 2025
c409ba0
Update token limits for GPT-4o and o1 models
Stijnus Sep 1, 2025
144cd0a
Add smart prompt selection and free model guidance
Stijnus Sep 1, 2025
9bf8370
Add Kimi model prompt handling and improve chat API
Stijnus Sep 2, 2025
4fb1716
Replace Phosphor icons with Lucide icons in settings
Stijnus Sep 2, 2025
4655fb4
Add global icon button styling and refactor usage
Stijnus Sep 2, 2025
185acb0
Update HuggingFace model token limits and context sizes
Stijnus Sep 2, 2025
6bd127a
Refactor button styles and simplify free model UI
Stijnus Sep 2, 2025
21790b8
Remove Service Status tab and related code
Stijnus Sep 3, 2025
ea39d42
Merge BoltDYI_NEW_FEATURE_TAB branch and update icons to Lucide
Stijnus Sep 3, 2025
31cbb01
Remove service-status tab and related references
Stijnus Sep 3, 2025
9c0d199
Refactor settings UI to use theme tokens and add notifications
Stijnus Sep 3, 2025
c8d7350
feat: comprehensive UI and component updates
Stijnus Sep 3, 2025
bb3b6c0
feat: merge BOLTDYI_BETA_FEAT - add notifications, theme tokens, and …
Stijnus Sep 3, 2025
940dc42
feat: comprehensive UI and service updates
Stijnus Sep 4, 2025
8fa5e0c
style: code formatting and cleanup
Stijnus Sep 4, 2025
c3e8646
feat: add custom prompt creation dialog and enhance select component
Stijnus Sep 4, 2025
741c178
refactor: remove legacy connection components and test files
Stijnus Sep 4, 2025
8dfcf13
feat: comprehensive GitHub integration system refactor
Stijnus Sep 4, 2025
61322c1
feat: enhance UI components, services, and integrations
Stijnus Sep 4, 2025
89357bf
Merge origin/main into BOLTDIY_BETA_MERGE - Update with mainstream ch…
Stijnus Sep 5, 2025
7754542
Fix TypeScript errors from merge conflict resolution
Stijnus Sep 5, 2025
6ea9480
Merge origin/main into BOLTDIY_BETA_MERGE - Update with latest main b…
Stijnus Sep 5, 2025
991ab7e
fix: resolve TypeScript errors in core settings and GitHub types
Stijnus Sep 5, 2025
5fbe80d
fix: resolve all TypeScript errors in GitHub integration
Stijnus Sep 5, 2025
d4dd97f
fix: resolve TypeScript union type errors in MCP integration
Stijnus Sep 5, 2025
5e0e328
feat: add GitLab integration and deployment enhancements
Stijnus Sep 5, 2025
a69bc44
refactor: improve code quality and update dependencies
Stijnus Sep 5, 2025
441f667
fix: resolve lint issue in githubApiService parameter naming
Stijnus Sep 5, 2025
405e05d
fix: resolve all TypeScript and ESLint errors for stable beta branch
Stijnus Sep 5, 2025
4fb0d19
Refactor connection flows to use server-side tokens
Stijnus Sep 5, 2025
082252c
Delete CHANGES.md
Stijnus Sep 5, 2025
1fdb6f2
Enhance Supabase project stats with database metrics
Stijnus Sep 5, 2025
ca4ef27
Add support for server-side Supabase token from env
Stijnus Sep 5, 2025
16cfa13
Add branch selection dialog for GitHub repo cloning
Stijnus Sep 5, 2025
52fc0cd
Resolve merge conflicts - keep working local model implementation
Stijnus Sep 5, 2025
f9cfe98
Fix TypeScript imports and add missing functions
Stijnus Sep 5, 2025
3cd141e
Restore working code from stash - resolve all merge conflicts
Stijnus Sep 5, 2025
1e0e85f
Restore missing dashboard tabs from BOLTDIY_BETA_MERGE
Stijnus Sep 5, 2025
238 changes: 153 additions & 85 deletions .env.production
@@ -1,115 +1,183 @@
# Rename this file to .env once you have filled in the below environment variables!
# =============================================================================
# BOLT.DIY PRODUCTION ENVIRONMENT CONFIGURATION
# =============================================================================
# Rename this file to .env once you have filled in the required values
# This file should contain production-ready API keys and configuration

# Get your GROQ API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=
# =============================================================================
# APPLICATION SETTINGS
# =============================================================================

# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=
# Environment mode (must be 'production' for production deployment)
NODE_ENV=production

# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=
# Application port (defaults to 5173 for development, 3000 for production)
PORT=3000

# Logging level (warn, error for production)
VITE_LOG_LEVEL=warn

# Default context window size for local models
DEFAULT_NUM_CTX=32768

# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
# =============================================================================
# MAJOR AI PROVIDER API KEYS
# =============================================================================

# Anthropic Claude - Primary provider
# Get your API key from: https://console.anthropic.com/
ANTHROPIC_API_KEY=

# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=
# OpenAI GPT models - Primary provider
# Get your API key from: https://platform.openai.com/api-keys
OPENAI_API_KEY=

# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
# Google Gemini - Primary provider
# Get your API key from: https://makersuite.google.com/app/apikey
GOOGLE_GENERATIVE_AI_API_KEY=

# You only need this environment variable set if you want to use oLLAMA models
# DONT USE http://localhost:11434 due to IPV6 issues
# USE EXAMPLE http://127.0.0.1:11434
OLLAMA_API_BASE_URL=
# =============================================================================
# SPECIALIZED AI PROVIDERS
# =============================================================================

# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=
# Groq (Fast inference models)
# Get your API key from: https://console.groq.com/keys
GROQ_API_KEY=

# Together AI (Fine-tuned models)
# Get your API key from: https://api.together.xyz/settings/api-keys
TOGETHER_API_KEY=

# OpenRouter (Multi-provider routing)
# Get your API key from: https://openrouter.ai/keys
OPEN_ROUTER_API_KEY=

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=
# =============================================================================
# INTERNATIONAL & SPECIALIZED PROVIDERS
# =============================================================================

# You only need this environment variable set if you want to use DeepSeek models through their API
# DeepSeek (Chinese models)
# Get your API key from: https://platform.deepseek.com/api_keys
DEEPSEEK_API_KEY=

# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=
# Moonshot AI / Kimi (Chinese models)
# Get your API key from: https://platform.moonshot.ai/console/api-keys
MOONSHOT_API_KEY=

# Get your Together API Key
TOGETHER_API_KEY=
# X.AI (Elon Musk's company)
# Get your API key from: https://console.x.ai/
XAI_API_KEY=

# You only need this environment variable set if you want to use Hyperbolic models
HYPERBOLIC_API_KEY=
HYPERBOLIC_API_BASE_URL=
# =============================================================================
# EUROPEAN & ADDITIONAL PROVIDERS
# =============================================================================

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
# Mistral (European models)
# Get your API key from: https://console.mistral.ai/api-keys/
MISTRAL_API_KEY=

# Get the Cohere Api key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
# Cohere (Canadian models)
# Get your API key from: https://dashboard.cohere.ai/api-keys
COHERE_API_KEY=

# Get LMStudio Base URL from LM Studio Developer Console
# Make sure to enable CORS
# DONT USE http://localhost:1234 due to IPV6 issues
# Example: http://127.0.0.1:1234
# Perplexity AI (Search-augmented models)
# Get your API key from: https://www.perplexity.ai/settings/api
PERPLEXITY_API_KEY=

# =============================================================================
# COMMUNITY & OPEN SOURCE PROVIDERS
# =============================================================================

# Hugging Face (Open source models)
# Get your API key from: https://huggingface.co/settings/tokens
HuggingFace_API_KEY=

# Hyperbolic (High-performance inference)
# Get your API key from: https://app.hyperbolic.xyz/settings
HYPERBOLIC_API_KEY=

# GitHub Models (GitHub-hosted OpenAI models)
# Get your Personal Access Token from: https://github.com/settings/tokens
GITHUB_API_KEY=

# =============================================================================
# LOCAL MODEL PROVIDERS
# =============================================================================

# Ollama (Local model server)
# DON'T USE http://localhost:11434 due to IPv6 issues
# USE: http://127.0.0.1:11434
OLLAMA_API_BASE_URL=

# LMStudio (Local model interface)
# Make sure to enable CORS in LMStudio
LMSTUDIO_API_BASE_URL=

# Get your xAI API key
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=
# =============================================================================
# COMPATIBLE API PROVIDERS
# =============================================================================

# Get your Perplexity API Key here -
# https://www.perplexity.ai/settings/api
# You only need this environment variable set if you want to use Perplexity models
PERPLEXITY_API_KEY=
# OpenAI-compatible API (Any provider using OpenAI format)
OPENAI_LIKE_API_BASE_URL=
OPENAI_LIKE_API_KEY=

# Get your AWS configuration
# https://console.aws.amazon.com/iam/home
AWS_BEDROCK_CONFIG=
# =============================================================================
# CLOUD INFRASTRUCTURE PROVIDERS
# =============================================================================

# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=

# Get your GitHub Personal Access Token here -
# https://github.com/settings/tokens
# This token is used for:
# 1. Importing/cloning GitHub repositories without rate limiting
# 2. Accessing private repositories
# 3. Automatic GitHub authentication (no need to manually connect in the UI)
#
# For classic tokens, ensure it has these scopes: repo, read:org, read:user
# For fine-grained tokens, ensure it has Repository and Organization access
VITE_GITHUB_ACCESS_TOKEN=
# AWS Bedrock Configuration (JSON format)
# Get your credentials from: https://console.aws.amazon.com/iam/home
# Example: {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey"}
AWS_BEDROCK_CONFIG=

# Specify the type of GitHub token you're using
# Can be 'classic' or 'fine-grained'
# Classic tokens are recommended for broader access
VITE_GITHUB_TOKEN_TYPE=
# =============================================================================
# THIRD-PARTY INTEGRATIONS
# =============================================================================

# Netlify Authentication
# GitHub Integration
# Personal Access Token for repository access
VITE_GITHUB_ACCESS_TOKEN=
VITE_GITHUB_TOKEN_TYPE=classic

# Supabase Integration
# Database URL and API keys for Supabase projects
# IMPORTANT: Use production-ready API keys, not development keys
# - Project URL: Your production Supabase project URL
# - Anon Key: Production anon/public key (safe for client-side)
# - Access Token: Production service role key (keep secure, server-side only)
VITE_SUPABASE_URL=
VITE_SUPABASE_ANON_KEY=
VITE_SUPABASE_ACCESS_TOKEN=

# Vercel Integration
# Access token for Vercel deployments and project management
# IMPORTANT: Use production token with appropriate permissions
VITE_VERCEL_ACCESS_TOKEN=

# Netlify Deployment
VITE_NETLIFY_ACCESS_TOKEN=

# Example Context Values for qwen2.5-coder:32b
#
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=
# =============================================================================
# PRODUCTION CONTEXT WINDOW EXAMPLES
# =============================================================================
# Example values for different model configurations:
#
# qwen2.5-coder:32b context window sizes:
# DEFAULT_NUM_CTX=32768 # Consumes ~36GB VRAM
# DEFAULT_NUM_CTX=24576 # Consumes ~32GB VRAM
# DEFAULT_NUM_CTX=12288 # Consumes ~26GB VRAM
# DEFAULT_NUM_CTX=6144 # Consumes ~24GB VRAM

# =============================================================================
# SETUP INSTRUCTIONS
# =============================================================================
# 1. Fill in the API keys for the providers you want to use in production
# 2. Rename this file to .env: mv .env.production .env
# 3. Verify all required keys are set before deployment
# 4. Test the application thoroughly in a staging environment first
#
# SECURITY NOTES:
# - Never commit production API keys to version control
# - Use environment-specific keys for production
# - Rotate keys regularly for security
# - Monitor usage and costs for all providers
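The setup instructions above can be sketched as a small shell script. This is a minimal, self-contained illustration: it writes a stand-in two-key template rather than using the real `.env.production` (which has far more keys), then copies it to `.env` and reports any keys left empty before deployment.

```shell
# Sketch of setup steps 1-3 above, using a stand-in template so the
# snippet is self-contained; in the real repo you would copy .env.production.
cat > .env.production.example <<'EOF'
ANTHROPIC_API_KEY=sk-ant-example
OPENAI_API_KEY=
EOF

# Step 2: rename/copy the template into place as .env
cp .env.production.example .env

# Step 3: list any keys that are still empty before deploying
grep -E '^[A-Z_]+=$' .env || echo "all keys filled"
```

Running this prints `OPENAI_API_KEY=`, flagging the one key left blank in the stand-in template.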
92 changes: 0 additions & 92 deletions CHANGES.md

This file was deleted.

28 changes: 28 additions & 0 deletions FAQ.md
@@ -102,4 +102,32 @@ You will need to make sure you have the latest version of Visual Studio C++ installed

---

<details>
<summary><strong>What about free models and OpenRouter limitations?</strong></summary>

Free models (especially on OpenRouter) can be a great starting point but have some limitations:

**Common Issues:**
- Rate limiting and usage restrictions
- Slower response times during peak hours
- Less consistent response quality
- Higher chance of service interruptions

**Best Practices:**
- Use free models for simple tasks like code review or basic questions
- Switch to paid models (Claude 3.5 Sonnet, GPT-4o) for complex development
- The app shows visual warnings when using free models
- Check our [Free Models Guide](../docs/free-models-guide.md) for detailed recommendations

**Recommended Paid Alternatives:**
- Claude 3.5 Sonnet (best overall performance)
- GPT-4o (excellent for code generation)
- DeepSeek Coder V2 (cost-effective, high quality)
- Gemini 2.0 Flash (fast and reliable)

The app includes smart recommendations and visual indicators to help you choose the right model for your needs.
</details>

---

Got more questions? Feel free to reach out or open an issue in our GitHub repo!