Welcome to the Giant AI documentation! This directory contains comprehensive guides for understanding and using Giant AI's features.
New to Giant AI? Start here:
- Quick Start Commands Guide - ESSENTIAL: which commands to run, where to run them, and when
- Neovim Plugin - Official Neovim integration for semantic search and AI analysis
- LLM Connectivity Guide - How Giant AI connects to your AI provider (Claude, OpenAI, etc.)
- RAG + AI Analysis Workflow - Making RAG search results actionable with AI analysis
- RAG vs Grep Usage Guide - When semantic search provides value over traditional text search
- Quick Start Commands Guide - ESSENTIAL: Command reference and workflow guide
- LLM Connectivity Guide - Provider architecture, configuration, and model switching
- MCP Demystified - Model Context Protocol and enhanced AI interactions
- RAG Overview Explained - Semantic search system architecture and benefits
- Context Management Explained - Project-aware AI with intelligent context loading
- Agent Mode vs Manual Coding - When to use autonomous vs interactive AI
- RAG vs Grep Usage Guide - Semantic search vs traditional text search comparison
- RAG + AI Analysis Workflow - Making RAG search results actionable with AI analysis
- Prompt Templates Unveiled - Optimizing AI interactions with context-aware prompts
- LLM Provider Practical Guide - Claude vs OpenAI vs local models for different tasks
- Providers Guide - Complete guide to all supported providers (Claude Desktop, OpenAI, Anthropic, Gemini)
- Neovim Plugin - Official Neovim integration (external repository)
- AI Pattern Refactor - Semantic code refactoring across multiple files
- Complete Setup Guide - Comprehensive installation and configuration details
- Giant Agent Documentation - Autonomous coding mode with checkpoints and safety controls
- "I want to understand how Giant AI works with my AI provider" → LLM Connectivity Guide
- "When should I use semantic search vs grep?" → RAG vs Grep Usage Guide
- "How do I set up autonomous coding?" → Agent Mode vs Manual Coding
- "What's MCP and why should I care?" → MCP Demystified
- "How does the semantic search work?" → RAG Overview Explained
- "How do I choose between Claude and OpenAI?" → LLM Provider Practical Guide
- "How can I refactor code across multiple files?" → AI Pattern Refactor
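The semantic-search-vs-grep distinction above can be made concrete with a small shell experiment. Everything here is illustrative: the file is synthetic, and the commented-out search command is a hypothetical placeholder, not Giant AI's actual CLI (see the RAG vs Grep Usage Guide for the real tooling).

```shell
# Set up a tiny demo project (synthetic file, for illustration only).
mkdir -p /tmp/rag-demo
cat > /tmp/rag-demo/auth.py <<'EOF'
def verify_credentials(user, password):
    """Check a login attempt against stored password hashes."""
    return hash(password) == lookup(user)
EOF

# grep matches literal text only: the phrase "password validation" never
# appears in auth.py, so this search comes back empty.
grep -rl "password validation" /tmp/rag-demo || echo "no literal match"

# A semantic (RAG) search ranks files by meaning, so a query like the one
# below would still surface auth.py even without a literal keyword match.
# The command name is a hypothetical placeholder:
# giant-ai-search "password validation" /tmp/rag-demo
```

This is the core trade-off the guide covers: grep is fast and exact, while semantic search finds conceptually related code that shares no keywords with the query.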
- Beginner: start with LLM Connectivity → RAG vs Grep → Quick Start Commands Guide
- Intermediate: Core Concepts → Feature Guides → Provider Selection
- Advanced: Tools & Utilities → Setup & Configuration → Architecture deep-dives
- Code examples are provided throughout - try them in your own projects
- Architecture diagrams show how components work together
- Troubleshooting sections help resolve common issues
- Best practices are highlighted for optimal results
- Main README - Project overview and quick start
- Contributing Guide - How to contribute to Giant AI
- License - Apache 2.0 license details
Our documentation follows these principles:
- Practical examples over theoretical explanations
- Clear use cases for every feature
- Step-by-step guides with expected outputs
- Troubleshooting for common issues
- Architecture insights for understanding how things work
Found an issue or want to improve the docs?
- Check our Contributing Guide
- Open an issue with the "documentation" label
- Submit a pull request with improvements
Ready to dive in? Start with the LLM Connectivity Guide to understand how Giant AI enhances your existing AI workflow!