Production-grade starter kit for autonomous AI agents on Google Antigravity.
Language: English | 中文 (repo homepage) | 中文 documentation | Español
In a world full of AI IDEs, I want enterprise-grade architecture to be as simple as Clone β Rename β Prompt.
This project leverages IDE context awareness (via .cursorrules and .antigravity/rules.md) to pre-embed a complete cognitive architecture in the repo.
When you open this project, your IDE stops being just an editorβit becomes an industry-savvy architect.
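For example, a repo-level rules file pre-loads conventions the moment the IDE opens the project. The following is an illustrative excerpt only, not the template's actual `.antigravity/rules.md`:

```markdown
<!-- .antigravity/rules.md (illustrative excerpt) -->
# Project Rules
- All agent logic lives in `src/`; tools go in `src/tools/`, skills in `src/skills/`.
- Background knowledge in `.context/` is auto-injected into every prompt.
- Save plans, logs, and evidence under `artifacts/`.
```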
First principles:
- Minimize repetition: the repo should encode defaults so setup is nearly zero.
- Make intent explicit: capture architecture, context, and workflows in files, not tribal knowledge.
- Treat the IDE as a teammate: contextual rules turn the editor into a proactive architect, not a passive tool.
While building with Google Antigravity or Cursor, I found a pain point:
The IDE and models are powerful, but an empty project gives them nothing to build on.
Every new project repeats the same tedious setup:
- "Should my code live in `src` or `app`?"
- "How do I define utilities so Gemini recognizes them?"
- "How do I help the AI remember prior context?"
This repetition wastes creative energy. My ideal workflow is: after a git clone, the IDE already knows what to do.
So I built this project: Antigravity Workspace Template.
Via pip:

```bash
pip install antigravity-agent
antigravity init my-project
cd my-project
```

Linux/macOS (installer script):

```bash
# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Run the installer
chmod +x install.sh
./install.sh

# 3. Configure your API keys
nano .env

# 4. Run the agent
source venv/bin/activate
python src/agent.py
```

Windows:

```bat
:: 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

:: 2. Run the installer
install.bat

:: 3. Configure your API keys (notepad .env)

:: 4. Run the agent
python src/agent.py
```

Manual setup:

```bash
# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Create a virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Configure your API keys
cp .env.example .env  # (if available) or create .env manually
nano .env

# 5. Run the agent
python src/agent.py
```

That's it! The IDE auto-loads configuration via `.cursorrules` + `.antigravity/rules.md`. You're ready to prompt.
This is not another LangChain wrapper. It's a minimal, transparent workspace for building AI agents that:
- π§ Have infinite memory (recursive summarization)
- π οΈ Auto-discover tools from
src/tools/ - π Auto-inject context from
.context/ - π Connect to MCP servers seamlessly
- π€ Coordinate multiple specialist agents
- π¦ Save outputs as artifacts (plans, logs, evidence)
Clone β Rename β Prompt. That's the workflow.
| Feature | Description |
|---|---|
| π§ Infinite Memory | Recursive summarization compresses context automatically |
| π§ True Thinking | "Deep Think" step using Chain-of-Thought prompts before acting |
| π Skills System | Modular capabilities as folders (src/skills/) with auto-loading (includes agent-repo-init) |
| π οΈ Universal Tools | Drop Python functions in src/tools/ β auto-discovered |
| π Auto Context | Add files to .context/ β auto-injected into prompts |
| π MCP Support | Connect GitHub, databases, filesystems, custom servers |
| π€ Swarm Agents | Multi-agent orchestration with Router-Worker pattern |
| β‘ Gemini Native | Optimized for Gemini 2.0 Flash |
| π LLM Agnostic | Use OpenAI, Azure, Ollama, or any OpenAI-compatible API |
| π Artifact-First | Convention-first workflow for storing plans, logs, and evidence in artifacts/ |
| π Sandbox Execution | Configurable code execution environments (local by default) |
Full documentation available in /docs/en/:
- Quick Start β Installation & deployment
- Philosophy β Core concepts & architecture
- Zero-Config β Auto tool & context loading
- MCP Integration β External tool connectivity
- Swarm Protocol β Multi-agent coordination
- Roadmap β Future phases & vision
The sandbox lets the agent execute generated Python code safely and consistently. It defaults to a local subprocess with isolation and limits.
- `SANDBOX_TYPE`: `local` (default) | `docker` (opt-in) | `e2b` (future)
- `SANDBOX_TIMEOUT_SEC`: maximum execution time in seconds (default `30`)
- `SANDBOX_MAX_OUTPUT_KB`: truncate stdout/stderr to limit size (default `10`)
Docker (opt-in) extra variables:
- `DOCKER_IMAGE` (default `python:3.11-slim`)
- `DOCKER_NETWORK_ENABLED` (`false` by default)
- `DOCKER_CPU_LIMIT` (default `0.5` cores)
- `DOCKER_MEMORY_LIMIT` (default `256m`)
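Under these settings, a local-subprocess sandbox can be approximated as follows. This is an illustrative sketch, not the template's actual sandbox module; the timeout and truncation parameters mirror `SANDBOX_TIMEOUT_SEC` and `SANDBOX_MAX_OUTPUT_KB`:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout_sec: int = 30, max_output_kb: int = 10) -> str:
    """Run Python `code` in a child process with a time limit and an output cap."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_sec,
        )
        out = proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        out = "[error] execution timed out"
    return out[: max_output_kb * 1024]  # truncate to the configured KB limit
```

For example, `run_sandboxed("print(2 + 2)")` returns the child's stdout, while an infinite loop is cut off at the timeout instead of hanging the agent.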
Example:
```bash
export SANDBOX_TYPE=local
export SANDBOX_TIMEOUT_SEC=30
export SANDBOX_MAX_OUTPUT_KB=10

# Docker mode
# export SANDBOX_TYPE=docker
# export DOCKER_IMAGE=python:3.11-slim
# export DOCKER_NETWORK_ENABLED=false
# export DOCKER_CPU_LIMIT=0.5
# export DOCKER_MEMORY_LIMIT=256m
```

Project layout:

```
src/
├── agent.py         # Main agent loop
├── memory.py        # JSON memory manager
├── mcp_client.py    # MCP integration
├── swarm.py         # Multi-agent orchestration
├── agents/          # Specialist agents
├── tools/           # Your custom tools
└── skills/          # Modular skills (Zero-Config)
.context/            # Knowledge base (auto-injected)
.antigravity/        # Antigravity rules
artifacts/           # Outputs & evidence
```
```python
# src/tools/my_tool.py
def analyze_sentiment(text: str) -> str:
    """Analyzes the sentiment of the given text."""
    return "positive" if len(text) > 10 else "neutral"
```

Restart the agent. Done! The tool is now available.
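Behind the scenes, discovery of such drop-in tools can work roughly like this. The helper below is a hypothetical sketch, not the template's actual loader:

```python
import importlib.util
import inspect
import pathlib

def discover_tools(folder: str = "src/tools") -> dict:
    """Load every module in `folder` and collect its public functions as tools."""
    tools = {}
    for path in sorted(pathlib.Path(folder).glob("*.py")):
        # Import the file as a standalone module.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Every public top-level function becomes a callable tool.
        for name, fn in inspect.getmembers(module, inspect.isfunction):
            if not name.startswith("_"):
                tools[name] = fn
    return tools
```

Calling `discover_tools()` after dropping `my_tool.py` into the folder would yield a mapping like `{"analyze_sentiment": <function>}`, ready to expose to the model.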
The built-in agent-repo-init skill supports two modes:
- `quick`: minimal clean scaffold
- `full`: scaffold + runtime profile defaults (`.env`, mission, context profile, init report)
You can run the portable script at skills/agent-repo-init/scripts/init_project.py:
```bash
python skills/agent-repo-init/scripts/init_project.py \
  --project-name my-new-agent \
  --destination-root /absolute/path/for/new/projects \
  --mode quick
```
A `full`-mode run adds the profile defaults:

```bash
python skills/agent-repo-init/scripts/init_project.py \
  --project-name my-new-agent \
  --destination-root /absolute/path/for/new/projects \
  --mode full --llm-provider openai --enable-mcp --disable-swarm --enable-docker --init-git
```
Connect to external tools:
```json
{
  "servers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "enabled": true
    }
  ]
}
```

The agent automatically discovers and uses all MCP tools.
Decompose complex tasks:
```python
from src.swarm import SwarmOrchestrator

swarm = SwarmOrchestrator()
result = swarm.execute("Build and review a calculator")
```

The swarm automatically:
- π€ Routes to Coder, Reviewer, Researcher agents
- π§© Synthesizes results
- π Exposes message logs via
get_message_log()for inspection
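The Router-Worker pattern behind the swarm can be illustrated with a minimal sketch. The classes below are hypothetical, not the template's actual `SwarmOrchestrator`:

```python
class Worker:
    """A specialist agent identified by the task keywords it handles."""
    def __init__(self, name, keywords):
        self.name, self.keywords = name, keywords

    def handle(self, task):
        # A real worker would call an LLM here; we just tag the task.
        return f"{self.name} handled: {task}"

class Router:
    """Dispatch each task to the first matching worker, else a default."""
    def __init__(self, workers, default):
        self.workers, self.default = workers, default

    def route(self, task):
        for w in self.workers:
            if any(k in task.lower() for k in w.keywords):
                return w.handle(task)
        return self.default.handle(task)

router = Router(
    [Worker("Coder", ["build", "code"]), Worker("Reviewer", ["review"])],
    default=Worker("Researcher", []),
)
```

Here `router.route("Build and review a calculator")` goes to the Coder first; a full orchestrator would also fan out to the Reviewer and synthesize both results.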
- β Phase 1-7: Foundation, DevOps, Memory, Tools, Swarm, Discovery
- β Phase 8: MCP Integration (fully implemented)
- π Phase 9: Enterprise Core (in progress)
- Added True Thinking: The agent now performs a real "Deep Think" step (Chain-of-Thought) before every action, generating a structured plan.
- Added Skills System: new `src/skills/` directory allows modular, folder-based agent capabilities (docs + code).
- Added agent-repo-init skill: initialize a clean, reusable repository from this template via `init_agent_repo`.
- Added local OpenAI-compatible backend support (e.g., Ollama) when no Google API key is provided.
- Fixed `.env` loading so runs from the `src/` folder still read the project-root config.
- CLI entrypoints (`agent.py` and `src/agent.py`) now accept tasks via arguments or `AGENT_TASK`.
See Roadmap for details.
Ideas are contributions too! Open an issue to:
- Report bugs
- Suggest features
- Propose architecture (Phase 9)
Or submit a PR to improve docs or code.
- @devalexanderdaza β First contributor. Implemented demo tools, enhanced agent functionality, proposed the "Agent OS" roadmap, and completed MCP integration.
- @Subham-KRLX β Added dynamic tools and context loading (Fixes #4) and the multi-agent cluster protocol (Fixes #6).
MIT License. See LICENSE for details.