Production-grade starter kit for autonomous AI agents on Google Antigravity.
Language: English | δΈζοΌδ»εΊδΈ»ι‘΅οΌ | δΈζζζ‘£ | EspaΓ±ol
In a world full of AI IDEs, I want enterprise-grade architecture to be as simple as Clone β Rename β Prompt.
This project leverages IDE context awareness (via .cursorrules and .antigravity/rules.md) to pre-embed a complete cognitive architecture in the repo.
When you open this project, your IDE stops being just an editorβit becomes an industry-savvy architect.
First principles:
- Minimize repetition: the repo should encode defaults so setup is nearly zero.
- Make intent explicit: capture architecture, context, and workflows in files, not tribal knowledge.
- Treat the IDE as a teammate: contextual rules turn the editor into a proactive architect, not a passive tool.
While building with Google Antigravity or Cursor, I found a pain point:
The IDE and models are powerful, but the empty project is too weak.
Every new project repeats the same boring setup:
- "Should my code live in `src` or `app`?"
- "How do I define utilities so Gemini recognizes them?"
- "How do I help the AI remember prior context?"
This repetition wastes creative energy. My ideal workflow is: after a git clone, the IDE already knows what to do.
So I built this project: Antigravity Workspace Template.
Linux / macOS:

```bash
# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Run the installer
chmod +x install.sh
./install.sh

# 3. Configure your API keys
nano .env

# 4. Run the agent
source venv/bin/activate
python src/agent.py
```

Windows:

```bash
# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Run the installer
install.bat

# 3. Configure your API keys (notepad .env)

# 4. Run the agent
python src/agent.py
```

Manual setup (without the installer):

```bash
# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Configure your API keys
cp .env.example .env  # (if available) or create .env manually
nano .env

# 5. Run the agent
python src/agent.py
```

That's it! The IDE auto-loads configuration via `.cursorrules` + `.antigravity/rules.md`. You're ready to prompt.
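The exact `.env` keys are not documented in this section; based on the backends mentioned below (Gemini, plus local OpenAI-compatible servers such as Ollama), a typical `.env` might look like the sketch below. The variable names are illustrative assumptions; check `.env.example` in the repo for the real ones.

```shell
# Hypothetical .env contents; key names are assumptions, see .env.example.
GOOGLE_API_KEY=your-gemini-key-here

# Or point at any OpenAI-compatible backend (e.g. a local Ollama server):
OPENAI_API_KEY=ollama
OPENAI_BASE_URL=http://localhost:11434/v1
```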
This is not another LangChain wrapper. It's a minimal, transparent workspace for building AI agents that:
- π§ Have infinite memory (recursive summarization)
- π οΈ Auto-discover tools from `src/tools/`
- π Auto-inject context from `.context/`
- π Connect to MCP servers seamlessly
- π€ Coordinate multiple specialist agents
- π¦ Save outputs as artifacts (plans, logs, evidence)
Clone β Rename β Prompt. That's the workflow.
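The "infinite memory" bullet above boils down to recursive summarization: once the history outgrows a budget, older turns are folded into a rolling summary and only recent turns stay verbatim. A minimal sketch follows; the function and field names are hypothetical, not the actual API of the template's `src/memory.py`.

```python
def compress(history, summarize, keep_recent=4):
    """Fold everything except the most recent turns into a rolling summary.

    Applied after every turn, this keeps the prompt bounded: older turns are
    repeatedly re-summarized, so context length stays roughly constant no
    matter how long the conversation runs.
    """
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    summary = summarize(older)
    return [{"role": "system", "content": f"Summary so far: {summary}"}] + recent

# Toy summarizer: keep the first three words of each message.
toy = lambda msgs: " / ".join(" ".join(m["content"].split()[:3]) for m in msgs)

history = [
    {"role": "user", "content": f"message number {i} with extra words"}
    for i in range(10)
]
compressed = compress(history, toy)
print(len(compressed))  # 5: one summary message + 4 recent turns
```

In a real agent, `summarize` would be an LLM call; the recursion comes from the previous summary itself being part of `older` on the next compression pass.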
| Feature | Description |
|---|---|
| π§ Infinite Memory | Recursive summarization compresses context automatically |
| π οΈ Universal Tools | Drop Python functions in src/tools/ β auto-discovered |
| π Auto Context | Add files to .context/ β auto-injected into prompts |
| π MCP Support | Connect GitHub, databases, filesystems, custom servers |
| π€ Swarm Agents | Multi-agent orchestration with Router-Worker pattern |
| β‘ Gemini Native | Optimized for Gemini 2.0 Flash |
| π LLM Agnostic | Use OpenAI, Azure, Ollama, or any OpenAI-compatible API |
| π Artifact-First | Every task produces plans, logs, and evidence |
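The "Auto Context" row amounts to reading every file under `.context/` and prepending it to the prompt. A hedged sketch of such a loader (the template's real implementation may differ):

```python
import pathlib
import tempfile

def load_context(context_dir):
    """Concatenate every file in the knowledge-base dir into one prompt preamble."""
    parts = []
    for path in sorted(pathlib.Path(context_dir).glob("*")):
        if path.is_file():
            parts.append(f"--- {path.name} ---\n{path.read_text()}")
    return "\n\n".join(parts)

# Demo on a throwaway directory standing in for .context/
with tempfile.TemporaryDirectory() as ctx:
    (pathlib.Path(ctx) / "style.md").write_text("Prefer small functions.")
    preamble = load_context(ctx)
    print(preamble)
```

Sorting the paths keeps the injection order deterministic, so the agent sees the same preamble on every run.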
Full documentation available in /docs/en/:
- Quick Start β Installation & deployment
- Philosophy β Core concepts & architecture
- Zero-Config β Auto tool & context loading
- MCP Integration β External tool connectivity
- Swarm Protocol β Multi-agent coordination
- Roadmap β Future phases & vision
```
src/
βββ agent.py        # Main agent loop
βββ memory.py       # JSON memory manager
βββ mcp_client.py   # MCP integration
βββ swarm.py        # Multi-agent orchestration
βββ agents/         # Specialist agents
βββ tools/          # Your custom tools
.context/           # Knowledge base (auto-injected)
.antigravity/       # Antigravity rules
artifacts/          # Outputs & evidence
```
```python
# src/tools/my_tool.py
def analyze_sentiment(text: str) -> str:
    """Analyzes the sentiment of given text."""
    return "positive" if len(text) > 10 else "neutral"
```

Restart the agent. Done! The tool is now available.
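Auto-discovery like this is typically done by importing every module in `src/tools/` and collecting its public functions. The sketch below shows one way to do it (the template's actual loader is not shown here, so treat the names as illustrative):

```python
import importlib.util
import inspect
import pathlib
import tempfile
import textwrap

def discover_tools(tools_dir):
    """Import every .py file in tools_dir and register its public functions."""
    registry = {}
    for path in pathlib.Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, fn in inspect.getmembers(module, inspect.isfunction):
            if not name.startswith("_"):  # skip private helpers
                registry[name] = fn
    return registry

# Demo: write a throwaway tool file, then discover it.
with tempfile.TemporaryDirectory() as tools_dir:
    (pathlib.Path(tools_dir) / "my_tool.py").write_text(textwrap.dedent('''
        def analyze_sentiment(text: str) -> str:
            """Analyzes the sentiment of given text."""
            return "positive" if len(text) > 10 else "neutral"
    '''))
    tools = discover_tools(tools_dir)
    print(sorted(tools))  # ['analyze_sentiment']
```

The docstring matters: it is what a Gemini-style function-calling API would surface as the tool description.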
Connect to external tools:
```json
{
  "servers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "enabled": true
    }
  ]
}
```

The agent automatically discovers and uses all MCP tools.
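Under the hood, a client along the lines of `src/mcp_client.py` would read this file, skip disabled entries, and launch each stdio server as a subprocess. A hedged sketch of just the config-parsing step (the real client's API is not shown here):

```python
import json

raw = '''{
  "servers": [
    {"name": "github", "transport": "stdio", "command": "npx",
     "args": ["-y", "@modelcontextprotocol/server-github"], "enabled": true},
    {"name": "scratch", "transport": "stdio", "command": "python",
     "args": ["server.py"], "enabled": false}
  ]
}'''

config = json.loads(raw)

# Only enabled stdio servers get a launch command.
commands = {
    s["name"]: [s["command"], *s["args"]]
    for s in config["servers"]
    if s.get("enabled") and s["transport"] == "stdio"
}
print(commands)  # {'github': ['npx', '-y', '@modelcontextprotocol/server-github']}
```

Each launch command would then be handed to a subprocess whose stdin/stdout carry the MCP JSON-RPC traffic.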
Decompose complex tasks:
```python
from src.swarm import SwarmOrchestrator

swarm = SwarmOrchestrator()
result = swarm.execute("Build and review a calculator")
```

The swarm automatically:
- π€ Routes to Coder, Reviewer, Researcher agents
- π§© Synthesizes results
- π Saves artifacts
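The Router-Worker pattern behind `SwarmOrchestrator` can be sketched as a router that matches a task to specialists, runs each, and synthesizes one answer. The routing rules and agent names below are illustrative, not the template's actual logic:

```python
def route(task):
    """Pick specialist agents by simple keyword matching on the task."""
    rules = {"build": "Coder", "review": "Reviewer", "research": "Researcher"}
    picked = [agent for kw, agent in rules.items() if kw in task.lower()]
    return picked or ["Coder"]  # fall back to a default worker

def execute(task, workers):
    # Each worker handles the task; the orchestrator synthesizes the results.
    results = {w: f"{w} output for: {task}" for w in workers}
    return " | ".join(results[w] for w in workers)

workers = route("Build and review a calculator")
print(workers)  # ['Coder', 'Reviewer']
print(execute("Build and review a calculator", workers))
```

A production router would use an LLM classification call instead of keywords, but the control flow (route, fan out, synthesize) is the same.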
- β Phase 1-7: Foundation, DevOps, Memory, Tools, Swarm, Discovery
- β Phase 8: MCP Integration (fully implemented)
- π Phase 9: Enterprise Core (in progress)
- Added local OpenAI-compatible backend support (e.g., Ollama) when no Google API key is provided.
- Fixed `.env` loading so runs from the `src/` folder still read the project-root config.
- Default `.env` now points to local backend placeholders instead of a hardcoded Google key.
- CLI entrypoints (`agent.py` and `src/agent.py`) now accept tasks via arguments or `AGENT_TASK`, instead of a fixed demo task.
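That entrypoint change amounts to a small precedence chain; the sketch below assumes CLI arguments win over the `AGENT_TASK` environment variable, which in turn wins over a built-in default (the exact precedence in the repo is an assumption):

```python
def resolve_task(argv, env, default="Run the demo task"):
    """CLI arguments win, then the AGENT_TASK env var, then a default."""
    if len(argv) > 1:
        return " ".join(argv[1:])
    return env.get("AGENT_TASK", default)

print(resolve_task(["agent.py", "summarize", "the", "repo"], {}))  # summarize the repo
print(resolve_task(["agent.py"], {"AGENT_TASK": "lint src/"}))     # lint src/
print(resolve_task(["agent.py"], {}))                              # Run the demo task
```

Passing `argv` and `env` in explicitly (rather than reading `sys.argv` and `os.environ` inside) keeps the resolution logic trivially testable.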
See Roadmap for details.
Ideas are contributions too! Open an issue to:
- Report bugs
- Suggest features
- Propose architecture (Phase 9)
Or submit a PR to improve docs or code.
- @devalexanderdaza β First contributor. Implemented demo tools, enhanced agent functionality, proposed the "Agent OS" roadmap, and completed MCP integration.
- @Subham-KRLX β Added dynamic tools and context loading (Fixes #4) and the multi-agent cluster protocol (Fixes #6).
MIT License. See LICENSE for details.