Autonomous AI agent that installs ANYTHING on any system with zero manual intervention.
Handles system packages, npm/pip/cargo packages, API configurations, environment setup, VM deployments, and more.
```shell
git clone https://github.com/anombyte93/auto-installer.git
cd auto-installer
```

Then use with any AI tool below ⬇️
| Tool | Type | Installation | Status |
|---|---|---|---|
| Claude Code | CLI Skill | `cp -r . ~/.claude/skills/auto-installer/` | ✅ Native |
| Open Codex | CLI Agent | See below | ✅ Full |
| Gemini CLI | CLI Agent | See below | ✅ Full |
| Grok CLI | CLI Agent | See below | ✅ Full |
| OpenAI Codex | CLI Agent | See below | ✅ Full |
| Continue.dev | IDE Extension | VSCode/JetBrains | ✅ Full |
| Aider | CLI Agent | See below | ✅ Full |
| ChatGPT CLI | CLI | N/A | ✅ Full |
| Any AI with shell access | Custom | Paste SKILL.md | ✅ Universal |
```shell
# Install as a skill
cp -r . ~/.claude/skills/auto-installer/

# Auto-activates when you say:
# "install docker"
# "configure gemini API in open-codex"
# "install pytest on my ubuntu VM"
```

All CLI tools follow the same pattern:
```shell
cd auto-installer

# Start your AI CLI tool
open-codex   # or
gemini-cli   # or
grok-cli     # or
codex        # or
aider

# Tell the AI:
"Read SKILL.md and install [WHAT YOU WANT]"
```

Tool-Specific Context Flags (if supported):
```shell
# Gemini CLI
gemini-cli --context SKILL.md

# Some tools auto-read files in the directory
open-codex   # automatically detects SKILL.md
```

- Install the Continue extension in VSCode/JetBrains
- Open the auto-installer directory in your IDE
- Use Continue chat:

```
@SKILL.md install docker
```
Paste this prompt to any AI with shell access:

```
You are an autonomous software installer. Read the file SKILL.md in the current directory to understand your capabilities and workflow.

CRITICAL RULES:
1. Execute ALL commands yourself - NEVER give me manual instructions
2. Only ask for: passwords, disambiguation, or risky operation confirmations
3. Report "READY TO USE" only when fully installed and verified

Now install: [WHAT YOU WANT TO INSTALL]
```
- ✅ Auto-detects installation type (system pkg, npm, API config, etc.)
- ✅ System packages - docker, postgresql, redis, nginx, etc.
- ✅ Language packages - npm, pip, cargo, gem, composer, go
- ✅ API configurations - set API keys, create config files
- ✅ Environment setup - PATH, .bashrc, .env files
- ✅ VMs & remote servers - auto-configures SSH, jump hosts
- ✅ GPU drivers - NVIDIA drivers + nvidia-docker, with reboot handling
- ✅ Safety checkpoints - VM snapshots before changes
- ✅ Error recovery - automatic retry with fixes
- ✅ Verification - functional tests, not just binary checks
install docker
install node 20 and postgres 15
install rust
install npm package open-codex
install pytest and numpy
install ruby gem rails
configure gemini API in open-codex
set up my AWS credentials
create .env file with DATABASE_URL
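For the `.env` request above, the AI would typically run something like the following sketch (the connection string is a placeholder, not a real credential):

```shell
# Hedged sketch of "create .env file with DATABASE_URL"
# (placeholder values; the AI would substitute real connection details)
cat > .env <<'EOF'
DATABASE_URL=postgres://user:password@localhost:5432/appdb
EOF
grep DATABASE_URL .env   # verify the key landed in the file
```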
install pytest on my ubuntu VM
install docker on staging and production VMs
install rust on gpu-server.company.com via bastion.company.com
install docker with GPU support and pytorch on my GPU server
- Package Managers: apt, brew, dnf, pacman, choco
- Examples: docker, postgresql, redis, nginx, git, curl, vim
- Python: pip, pip3, pipx, poetry
- Node.js: npm, yarn, pnpm, bun
- Ruby: gem, bundler
- Rust: cargo, rustup
- Go: go install, go get
- PHP: composer
- Perl: cpan
- Java: maven, gradle
- Version Managers: nvm, pyenv, rbenv
- CLI Tools: gh, kubectl, terraform, ansible, aws-cli
- IDEs: VSCode extensions (`code --install-extension`)
- Desktop Apps: snap, flatpak, AppImage, Homebrew Cask
- API Keys: Environment variables, config files
- Config Files: JSON, TOML, YAML creation/editing
- Environment: PATH, .bashrc, .zshrc, .env files
- SSH Keys: Generation and configuration
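The SSH key step can be sketched as below; the key path and comment are illustrative, and the `[ -f ... ]` guard keeps an existing key from being overwritten:

```shell
# Hedged sketch: non-interactive ed25519 key generation (path is illustrative)
KEY="$HOME/.ssh/id_ed25519_autoinstaller"
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t ed25519 -f "$KEY" -N "" -C "auto-installer"
ls "$KEY" "$KEY.pub"   # both private and public key should exist
```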
- Local system - Direct installation
- VMs - VirtualBox, VMware, Parallels (auto-detects)
- Remote servers - Via SSH
- Jump hosts - Multi-hop SSH configuration
- Containers - Docker, Kubernetes pods
- GPU Drivers: NVIDIA drivers + nvidia-docker
- ML/Data Science: PyTorch, TensorFlow, CUDA
- Language Runtimes: python, node, ruby, go, rust, java, php
- Parse - AI detects package type and target system
- Configure - Sets up SSH/VMs if needed
- Checkpoint - Creates VM snapshot or system backup
- Install - Uses appropriate package manager
- Verify - Runs functional tests (not just checks if binary exists)
- Report - Confirms "READY TO USE" with versions
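The Verify step above means running the tool, not just finding it. A minimal sketch, using Python as the example package (any installed tool could be substituted):

```shell
# A binary check alone is weak: the file may exist but be broken
command -v python3
# A functional test proves the install actually works
python3 -c 'import sys; print(sys.version.split()[0])'
```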
- VM snapshots before any changes
- System state backups for remote servers
- Checkpoint at every critical step
- Automatic rollback on failures
- Never leaves systems in broken state
- Logs all actions for debugging
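A checkpoint for a VirtualBox target might look like this sketch; the VM name `staging` is illustrative, and the guard makes it a no-op on machines without VirtualBox:

```shell
# Hedged checkpoint sketch: snapshot a VirtualBox VM before any changes
VM=staging
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage snapshot "$VM" take "pre-install-$(date +%Y%m%d-%H%M%S)"
else
  echo "VBoxManage not found; skipping snapshot"
fi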
- Any OS: Linux, macOS, Windows/WSL
- For VMs: VirtualBox, VMware, or Parallels
- For Remote: SSH access (auto-configured if needed)
- For GPU: NVIDIA GPU (drivers auto-installed)
AI asks me to run commands manually:
- Emphasize: "Execute everything yourself. I cannot run commands."
- Add to prompt: "CRITICAL: Run all commands yourself via shell access"
AI doesn't read SKILL.md:
- Use: "Read the file SKILL.md then install XYZ"
- Or paste SKILL.md content directly into chat
Skill doesn't activate in Claude Code:
- Check it's in `~/.claude/skills/auto-installer/SKILL.md`
- Verify the YAML frontmatter is valid
- Say explicitly: "use auto-installer skill to install XYZ"
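For reference, a Claude Code skill file opens with YAML frontmatter shaped like the example below (field values are illustrative; it is written to `/tmp` so nothing real is touched):

```shell
# Illustrative frontmatter shape for a skill's SKILL.md
cat > /tmp/SKILL.md.example <<'EOF'
---
name: auto-installer
description: Autonomously installs packages, APIs, and environments.
---
EOF
grep -c '^---$' /tmp/SKILL.md.example   # a valid block has two delimiters
```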
Installation logs:
- Check `/var/log/apt/`, `~/.npm/_logs/`, `~/.pip/`
- All bash commands are logged by Claude/AI tools
- Use `history` to review executed commands
```
auto-installer/
├── SKILL.md          # Complete installation logic (AI reads this)
├── README.md         # This file (for humans)
├── EXAMPLES.md       # Real-world usage examples
├── CONTRIBUTING.md   # How to contribute
├── CHANGELOG.md      # Version history
└── LICENSE           # MIT License
```
install docker, node 20, postgresql, redis, and python testing tools
The AI will:
- Clarify ambiguities ("python testing tools" → pytest only, or the full suite?)
- Install everything in optimal dependency order
- Configure and start all services
- Verify each package works
install docker on prod-server.example.com
The AI will:
- Create system backups first
- Install packages with minimal downtime
- Start and verify services
- Keep backups until success confirmed
install docker with nvidia GPU support and pytorch with CUDA on gpu-server
The AI will:
- Detect GPU hardware
- Install NVIDIA drivers
- Reboot server and poll until back online
- Install nvidia-docker
- Install PyTorch with matching CUDA version
- Verify GPU access with test container
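The final verification step above can be sketched as follows; the CUDA image tag is an example, and the guard degrades gracefully on machines with no NVIDIA driver:

```shell
# Hedged sketch of GPU verification (driver, then container runtime)
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv
  docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
else
  echo "no NVIDIA driver detected; GPU verification pending"
fi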
See CONTRIBUTING.md for guidelines on:
- Reporting bugs
- Suggesting features
- Submitting improvements
- Testing new scenarios
MIT License - Use freely with any AI tool
Core Principles:
- Correctness > Speed - Do it right, not fast
- Safety > Convenience - Checkpoints before changes
- Autonomy > User interaction - ZERO manual commands
- Transparency > Abstraction - Clear logs and reports
- Robustness > Simplicity - Handle edge cases
Design Goals:
- ZERO manual commands for the user
- Only ask for: passwords, disambiguation, risky confirmations
- Always verify installations actually work
- Create safety checkpoints before changes
- Recover from errors automatically
- Never leave systems in broken state
Made with ❤️ for the AI coding community

Supports: Claude Code • Open Codex • Gemini CLI • Grok CLI • Continue.dev • Aider • and more