- Prepend new entries with:
  `# [YYYY-MM-DD] Dev Log: <Subject>`
  - Why: <one-line reason>
  - What: <brief list of changes>
  - Result: <outcome/impact>
- Prepend new WIP items within the `# WIP` section.
- Use `- [ ]` for tasks, `- [x]` for completed items.
- Write to the damn point. Be clear, be super concise - no fluff, no hand-holding, no repetition.
- Be specific about what was done, why it was done, and any important context.
- Minimal markdown markers, no unnecessary formatting, minimal emojis.
- Reference issue numbers in the format `#<issue-number>` for easy linking.
- Why: the repo had a decent landing page but still dumped too much context into one README and did not read like an organized OSS project
- What:
  - rewrote `README.md` as the deva.sh front page instead of a giant mixed-purpose document
  - added `docs/index.md`, `docs/quick-start.md`, `docs/how-it-works.md`, `docs/philosophy.md`, `docs/authentication.md`, `docs/advanced-usage.md`, and `docs/troubleshooting.md`
  - revalidated the docs against real `--dry-run` output instead of just `--help`
  - corrected the docs and CLI help to describe persistent containers as project-scoped shapes, not a naive single-container story
  - fixed auth-specific persistent naming to include the agent, and fixed Copilot `--dry-run` so it no longer starts the proxy
  - retargeted the docs site config to `docs.deva.sh`, added a GitHub Pages workflow path for the docs subdomain, and kept CI docs-build validation
  - added a nightly image workflow that resolves the latest upstream tool versions and publishes `nightly` and dated nightly container tags without creating fake semver releases
  - factored version resolution into a shared script so nightly and tagged release images stop drifting, and removed the fake "commit during release workflow" step
  - aligned `CHANGELOG.md` and contribution guidance with the new docs split
- Result: the repo now has an actual docs spine for onboarding, internals, auth, and advanced workflows; the documented behavior matches the observed runtime shape; docs can live on `docs.deva.sh`; and both nightly and tagged image builds use one consistent version-resolution path instead of hand-wavy workflow divergence
- Why: the repo still looked half-finished in public, the installer lagged behind the actual agent set, and recent auth switching work exposed ugly mount behavior
- What:
  - added `LICENSE`, `SECURITY.md`, and `CONTRIBUTING.md`
  - rewrote `README.md` into a cleaner OSS landing page with badges, quick start, auth, config-home, and security sections
  - fixed `install.sh` to install `gemini.sh` and `shared_auth.sh`, and cleaned the installer output
  - fixed Claude `--auth-with api-key` to pass `ANTHROPIC_AUTH_TOKEN` and `ANTHROPIC_BASE_URL`
  - replaced credential backup/restore with auth-file overlay mounts, filtered junk from config-home fan-out, and stopped `--dry-run` from writing files
  - fixed `workflows/RELEASE.md` to use `deva.sh` as the version source
- Result: the repo now reads like an actual OSS project, fresh installs match the current feature set, and auth switching is less fragile ahead of the 0.9.2 release
- Why: `make versions-up` exited 56 during the GitHub API changelog fetch; unauthenticated `curl` hit the GitHub API 403 rate limit (60 requests/hour)
- What:
  - Changed `fetch_github_releases()` and `fetch_recent_github_releases()` in `scripts/release-utils.sh` from `curl` to `gh api` for authenticated requests
  - All changelog fetch functions now fail gracefully with `{ echo "(fetch failed)"; return 0; }` instead of `|| return` (which was aborting the script under `set -e`)
  - Added fallback in `load_versions()` - on network fetch failure, use the current image version instead of an empty string
  - Added pre-build version check in `scripts/version-upgrade.sh` - warns about missing versions but proceeds with the build
- Result: Build script resilient to transient network failures and GitHub rate limits. Changelog display is best-effort, won't block builds.
Files changed: `scripts/release-utils.sh` (lines 175, 221, 452, 480), `scripts/version-upgrade.sh` (lines 82-95)
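The fail-gracefully pattern above can be sketched as follows. This is a minimal illustration, not the actual `release-utils.sh` code: the function name and repo argument are hypothetical; only the `|| { echo "(fetch failed)"; return 0; }` idiom is from the log.

```shell
set -e  # the project scripts run under set -e, which is what made `|| return` fatal

# Hypothetical fetch helper: swallow fetch failures so changelog display
# stays best-effort instead of aborting the whole build script.
fetch_changelog() {
    # `|| return` would propagate the failure and kill the caller under set -e;
    # the block below degrades to a placeholder and reports success instead.
    gh api "repos/$1/releases/latest" --jq '.body' 2>/dev/null ||
        { echo "(fetch failed)"; return 0; }
}

# Even with no network or no gh auth, this prints a placeholder and continues.
changelog=$(fetch_changelog "example/repo")
echo "changelog: $changelog"
```

The key point is that the fallback group ends in `return 0`, so the function's exit status is always success regardless of the network.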
- Why: Common dev workflow need - testing containers, building images, CI/CD simulation inside deva environments
- What: Auto-mount the Docker socket (`/var/run/docker.sock`) by default with graceful detection; opt-out via the `--no-docker` flag or `DEVA_NO_DOCKER=1`; quick permission fix (`chmod 666`) for deva user access
- Result: DinD works out-of-box on Linux/macOS/WSL2, no manual socket mounting needed, aligns with the YOLO philosophy (make it work, the container is the boundary)
- Why: Users have multiple credential files, needed direct path support beyond predefined auth methods
- What: `--auth-with /path/to/creds.json` now works; auto-backup of existing credentials; workspace session tracking in `~/.config/deva/sessions/*.json`
- Result: Flexible credential switching, backward compatible with predefined methods (claude/api-key/bedrock/etc.)
- Why: Per-invocation containers were slow, stateless, and clobbered each other; we wanted tmux-like persistence.
- What:
  - Container naming settles on `deva-<parent>-<project>` for the shared instance and `--rm` for throwaway runs, avoiding cross-repo collisions.
  - Subcommands (`ps`, `attach`, `shell`, `stop`, `rm`, `clean`) mirror docker/tmux; smart auto-select handles the single-container case; `attach` boots an agent, `shell` drops into zsh.
  - Global mode (`-g`) exposes containers outside the current tree while keeping local defaults sane; lifecycle keeps containers detached but exec-ready.
  - Cleanup: removed the Linux-only flock, dead attach helpers, and stray comments to stay shellcheck-clean without breaking macOS.
- Result: Containers now persist per project with faster warm starts, intuitive control flow, and no platform regressions.
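The `deva-<parent>-<project>` scheme can be sketched as a small helper. The function name is hypothetical; the naming shape is the one described above — keying on the last two path components is what avoids collisions between same-named projects under different parents.

```shell
# Hypothetical sketch of the project-scoped container naming described above.
container_name_for() {
    local dir="$1"
    local project parent
    project=$(basename "$dir")
    parent=$(basename "$(dirname "$dir")")
    echo "deva-${parent}-${project}"
}

container_name_for "/home/alice/work/api"   # deva-work-api
container_name_for "/home/alice/oss/api"    # deva-oss-api - no collision
```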
- Why: Port mature multi-auth system from claude.sh to support different AI providers (Anthropic, OpenAI, AWS, Google, GitHub) across all agents.
- What:
  - Design Decision: Agent-level auth (`deva.sh claude --auth-with bedrock`) over global-level (`deva.sh --auth-with bedrock claude`)
  - Auth Method Naming: `claude` = Claude.ai OAuth, `chatgpt` = ChatGPT OAuth, `copilot` = GitHub Copilot proxy (different API endpoints per agent)
  - Copilot Complexity: Claude uses Anthropic endpoints (`/v1/messages`, `ANTHROPIC_BASE_URL`); Codex uses OpenAI endpoints (`/v1/chat/completions`, `OPENAI_BASE_URL`)
  - Auth Matrix: Claude supports claude/oat/api-key/bedrock/vertex/copilot; Codex supports chatgpt/api-key/copilot; copilot works via different proxy endpoints
  - Implementation Plan: Each agent parses `--auth-with`; shared copilot proxy management; agent-specific env vars and endpoints
- Result: Agent-level auth with provider-specific implementations. Copilot proxy serves both Anthropic and OpenAI formats but agents configure different base URLs and env var namespaces.
Claude via copilot proxy:
- Uses `ANTHROPIC_BASE_URL=http://localhost:4141`
- Uses `ANTHROPIC_API_KEY=dummy`
- Hits endpoint: `POST /v1/messages` (Anthropic format)
- Proxy translates: Anthropic messages → OpenAI format → GitHub Copilot

Codex via copilot proxy:
- Uses `OPENAI_BASE_URL=http://localhost:4141`
- Uses `OPENAI_API_KEY=dummy`
- Hits endpoint: `POST /v1/chat/completions` (OpenAI format)
- Proxy handles: OpenAI format → GitHub Copilot (direct)
| Agent | claude | oat | api-key | bedrock | vertex | chatgpt | copilot |
|---|---|---|---|---|---|---|---|
| Claude | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ (Anthropic endpoints) |
| Codex | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ (OpenAI endpoints) |
- Why: Running agents should be trivial; mixing build concerns into the run wrapper created ambiguity (pull vs build, tag vs Dockerfile) and too many knobs. Teams need predictable per‑project defaults.
- What:
- Added `--profile`/`-p` (canonical).
- Rust maps to `ghcr.io/thevibeworks/deva:rust` (same repo, tag-based).
- Wrapper flags allowed before or after the agent; help and examples clarified.
- Reserve `-p` for deva; pass agent prompt flags after `--` (breaking but explicit).
- Config: default XDG root `~/.config/deva` with per-agent homes; `-c DIR` treated as DEVA ROOT when it contains `claude/` or `codex/`.
- Auto-link legacy creds into DEVA ROOT by default (`~/.claude*`, `~/.codex`); disable with `--no-autolink`, `AUTOLINK=false`, or `DEVA_NO_AUTOLINK=1`.
- Builder flags were WIP and not shipped; removed from code - use Makefile targets (`make build`, `make build-rust`) or explicit `docker build` instead.
- When an image:tag is missing, the error now prints one-liners per profile (Makefile + docker commands).
- Makefile: add `build-rust`, `buildx-multi-rust`; bump CLI versions (Claude `1.0.119`, Codex `0.39.0`).
- Result: Zero-thought startup, clean per-project defaults via `PROFILE` in `.deva`, reproducible paths with fewer CLI options, clearer fixes when images are absent.
Context (whole view):
- The real problem
- Build concerns leaked into run UX; unclear precedence; too many flags.
- Lack of per‑project defaults led to ad‑hoc flags per run.
- What users actually need
- “deva” just runs; per‑project default profile; single explicit prepare step; actionable errors; reproducibility with pinned tags and Makefile targets.
- Better UX proposal (next)
  - `prepare` subcommand (pull tag; optionally build via env); `.deva` `PROFILE` first; auto-detect profile (Cargo.toml ⇒ rust); `doctor` diagnostics.
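The proposed precedence (explicit `.deva` profile first, then auto-detection, then a default) could be sketched as below. Everything here is a hypothetical illustration of the proposal: the `PROFILE=` file format and the function name are assumptions, only the `.deva`-first and Cargo.toml ⇒ rust rules are from the log.

```shell
# Hypothetical sketch of profile resolution for the proposed UX.
resolve_profile() {
    local dir="$1" p=""
    # Explicit project default wins: PROFILE=<name> in a project-local .deva.
    if [ -f "$dir/.deva" ]; then
        p=$(sed -n 's/^PROFILE=//p' "$dir/.deva" | head -n1)
    fi
    if [ -n "$p" ]; then
        echo "$p"
    elif [ -f "$dir/Cargo.toml" ]; then
        echo "rust"       # auto-detect: Cargo.toml => rust profile
    else
        echo "default"
    fi
}

d=$(mktemp -d)
touch "$d/Cargo.toml"
resolve_profile "$d"            # rust (auto-detected)
echo "PROFILE=default" > "$d/.deva"
resolve_profile "$d"            # default (explicit beats detection)
```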
- Why: Transform claude-code-yolo from Claude-specific wrapper into unified multi-agent wrapper per #98. Enable Codex integration without breaking existing YOLO ergonomics.
- What: COMPREHENSIVE REFACTOR
- Architecture: Built a pluggable agent system with `agents/claude.sh` and `agents/codex.sh` modules, unified dispatcher `deva.sh`
- Container Management: Project-scoped containers (`deva-<agent>-<project>-<pid>`), `--ps`/`--inspect`/`shell` commands with fzf picker
- Config Evolution: `--config-home`/`-c` mounts entire auth homes (`.claude`, `.codex`) to `/home/deva`; new `.deva*` config files with `.claude-yolo*` back-compat
- Agent Safety: Auto-inject safety flags (`--dangerously-skip-permissions` for Claude, `--dangerously-bypass-approvals-and-sandbox` for Codex)
- OAuth Protection: Strip conflicting `OPENAI_*` env vars when `.codex/auth.json` is mounted to preserve OAuth sessions
- Backward Compatibility: `claude-yolo` → `deva.sh claude` shim, deprecation warnings for `claude.sh`/`claudeb.sh`
- Documentation: Complete rewrite of README, CHANGELOG, install scripts to reflect the deva.sh-first workflow
- Result: MAJOR VERSION - claude-code-yolo is now "deva.sh Multi-Agent Wrapper". All legacy functionality preserved via shims, new multi-agent capabilities unlocked, Codex OAuth stable.
- Why: Add first-class support for GitHub Copilot (`copilot-api`) as an Anthropic-compatible backend for Claude Code (local + Docker), resilient behind proxies.
- What:
  - New `--auth-with copilot` mode: token validation (saved or `GH_TOKEN`/`GITHUB_TOKEN`), local proxy lifecycle management.
  - Base URL wiring: local `ANTHROPIC_BASE_URL=http://localhost:4141`; Docker `http://host.docker.internal:4141` (+ entrypoint rewrite safety).
  - Proxy bypass: set `NO_PROXY`/`no_grpc_proxy` to include `localhost,127.0.0.1,host.docker.internal` so calls to :4141 skip HTTP/gRPC proxies.
  - Model defaults: auto-detect from `/v1/models`; prefer `gpt-5-mini` for fast, fall back to `gpt-4o-mini`; main fallback `claude-sonnet-4`.
  - Docker: auto-pick models from the host proxy when unset; pass via `-e` to claude in the container.
- Result: Copilot proxy works reliably in both modes; sane defaults without manual env; no more proxy misroutes.
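The proxy-bypass wiring can be sketched as below. The env var names (`NO_PROXY`, `no_grpc_proxy`) and host list are from the log; the dedup helper itself is a hypothetical sketch, not the shipped code.

```shell
# Hypothetical sketch: append the local-proxy hosts to NO_PROXY (without
# duplicating entries) so calls to the copilot proxy on :4141 never get
# routed through a corporate HTTP/gRPC proxy.
ensure_no_proxy() {
    local entry
    for entry in localhost 127.0.0.1 host.docker.internal; do
        case ",${NO_PROXY:-}," in
        *",$entry,"*) ;;  # already listed, keep as-is
        *) NO_PROXY="${NO_PROXY:+$NO_PROXY,}$entry" ;;
        esac
    done
    no_grpc_proxy="$NO_PROXY"
    export NO_PROXY no_grpc_proxy
}

NO_PROXY="corp-proxy.internal"
ensure_no_proxy
echo "$NO_PROXY"   # corp-proxy.internal,localhost,127.0.0.1,host.docker.internal
```

Running it twice is safe: the `case` match skips entries that are already present.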
- Why: Migrate from lroolle org to thevibeworks, shorten Docker image name for cleaner registry
- What: Updated all references, Docker images, URLs across entire codebase, kept command name for backward compatibility
- Result: Clean migration with repo at thevibeworks/claude-code-yolo, Docker image at thevibeworks/ccyolo, command stays claude-yolo
Changes Made:
- Repo: `lroolle/claude-code-yolo` → `thevibeworks/claude-code-yolo`
- Docker: `ghcr.io/lroolle/claude-code-yolo` → `ghcr.io/thevibeworks/ccyolo`
- Command: Kept `claude-yolo` (backward compatibility)
- Project: Kept "Claude Code YOLO" title
Files Updated: Makefile, README.md, CLAUDE.md, claude.sh, claude-yolo, install.sh, Dockerfile, CHANGELOG.md, DEV-LOGS.md, scripts/, claude-yolo-pro/
Addresses issue #48.
Problem: Messy auth flags, poor environment handling, inconsistent Docker mounts.
Solution:
- Unified auth with the `--auth-with` pattern (claude|api-key|bedrock|vertex)
- Proper environment var handling with the `-e` flag
- Controlled auth directory mounting with explicit permissions
- Smart model name handling for each auth mode
Technical:
- Freed -v for Docker volume mounts (was conflicting with --vertex)
- Added model name translation for API key mode
- Implemented proper ARN generation for Bedrock
- Added environment detection for tools and auth status
Result: Clean auth system, proper env handling, secure mounts.
Problem: Users needed separate auth sessions for different projects and better environment variable handling.
Root Cause: Fixed path mounting made multi-project auth management difficult, no env var support in Docker mode.
Solution: Added --config flag for custom Claude config home and -e flag for environment variables.
Implementation:
- `--config ~/work-claude` creates and mounts a custom config directory
- `-e NODE_ENV=dev` or `-e DEBUG` passes environment variables
- Fixed npm-global path handling for the claude user
- Standardized mount paths to `/home/claude` instead of `/root`
- Environment variable naming: `CLAUDE_YOLO_*` → `CCYOLO_*`
- Auth isolation: unset conflicting auth variables per mode
Benefits:
- ✅ Project isolation: Separate auth sessions per project
- ✅ Environment control: Full env var support in Docker mode
- ✅ Path consistency: All mounts to `/home/claude`
- ✅ Auth reliability: No cross-contamination between auth modes
Related: Issues #46, #45 (configuration management)
Status: ✅ COMPLETED
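The `-e` handling can be sketched as below. This is a hypothetical illustration (function and array names are mine, and the block assumes bash): `-e NAME=value` passes a literal pair, while `-e NAME` forwards the variable's current host value, mirroring docker's own `-e` semantics.

```shell
# Hypothetical sketch of collecting -e specs into docker run arguments.
docker_args=()
add_env() {
    case "$1" in
    *=*) docker_args+=("-e" "$1") ;;           # -e NODE_ENV=dev: literal pair
    *)   docker_args+=("-e" "$1=${!1:-}") ;;   # -e DEBUG: forward host value
    esac
}

DEBUG=1
add_env "NODE_ENV=dev"
add_env "DEBUG"
printf '%s\n' "${docker_args[@]}"
```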
Problem: Command line arguments become unmanageable for complex setups with multiple volumes and environment variables.
Current Pain Point:
`claude-yolo -v ~/.ssh:/root/.ssh:ro -v ~/Desktop/claude:/home/claude/.claude/ -v ~/.config/git:/home/claude/.config/git -v ../yolo-tools/scripts/barkme.sh:/home/claude/.local/bin/barkme.sh --continue`
Root Cause Analysis:
- CLI limitations: Long command lines are hard to edit, share, version control
- Multi-container needs: Users want playwright services, MCP servers, other tools
- Team collaboration: Complex setups need to be shared across team members
- Missing configuration hierarchy: No project vs user vs local settings distinction
Proposed Solution: Docker Compose integration following Claude Code's settings pattern
Configuration Hierarchy (mirrors Claude Code's approach):
```
.claude/
├── claude-yolo.local.yml      # Project-local (gitignored)
├── claude-yolo.yml            # Project-shared (version controlled)
└── ~/.claude/claude-yolo.yml  # User global
```
Multi-container Support:
```yaml
# .claude/claude-yolo.yml
version: '3.8'
services:
  claude:
    image: ghcr.io/thevibeworks/ccyolo:latest
    volumes:
      - ~/.ssh:/root/.ssh:ro
      - ${PWD}:${PWD}
    depends_on:
      - playwright
      - mcp-server
  playwright:
    image: mcr.microsoft.com/playwright:v1.40.0-focal
    ports: ["3000:3000"]
  mcp-server:
    image: custom/mcp-server:latest
    ports: ["8080:8080"]
```
Implementation Requirements:
- Auto-detection: Check for compose files in precedence order
- Backward compatibility: Keep CLI args for simple cases
- Multi-container orchestration: Full Docker Compose integration
- Settings coexistence: Respect existing `.claude/settings.json` handling
Benefits:
- ✅ Manageable configs: No more insane command lines
- ✅ Team collaboration: Share service definitions via git
- ✅ Multi-container: Enable complex development environments
- ✅ Familiar patterns: Follow Claude Code's settings hierarchy
- ✅ Version control: Compose files are easily tracked
Related Issues:
- Issue #24: Environment variable support (partially addresses)
- Issue #33: DevContainer support question (compose provides better solution)
Status: Analysis complete, ready for implementation
Problem: sudo claude-yolo fails with "usermod: UID '0' already exists" error.
Root Cause: Can't reassign existing UID 0 (root) to claude user.
Security Fix: Handle UID=0 and GID=0 independently to prevent root group assignment.
Solution: Use fallback UID/GID 1000 for proper file ownership with existing collision handling.
Status: ✅ COMPLETED - PR #22
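The fix can be sketched as a tiny helper. This is a hypothetical illustration of the described behavior: UID and GID are resolved independently, and id 0 falls back to 1000 because root's id can't be reassigned to the claude user.

```shell
# Hypothetical sketch of the independent UID/GID fallback from PR #22.
resolve_id() {
    local id="$1"
    if [ "$id" -eq 0 ]; then
        echo 1000   # can't reassign root's id to the claude user
    else
        echo "$id"
    fi
}

# sudo claude-yolo reports uid=0 gid=0; each falls back independently.
echo "uid=$(resolve_id 0) gid=$(resolve_id 0)"       # uid=1000 gid=1000
echo "uid=$(resolve_id 501) gid=$(resolve_id 20)"    # uid=501 gid=20
```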
Problem: CI failing with "Invalid OIDC token" after changing permissions to write.
Solution: Added explicit `github_token: ${{ secrets.GITHUB_TOKEN }}` to force direct token auth.
Cause: Write permissions trigger GitHub App auth by default, but no App configured.
Status: ✅ COMPLETED
Problem: Overcomplicated workflow with manual duplicate detection using GitHub CLI.
Solution: Adopted ChatGPT pattern with critical fixes:
- `pull_request_target` → enables secret access for `ANTHROPIC_API_KEY`
- Concurrency groups → automatic duplicate prevention
- Proper checkout ref → works for comment-triggered reviews
- Removed complex GitHub CLI duplicate detection logic
Result: 50% fewer lines, more reliable, follows GitHub best practices.
Status: ✅ COMPLETED
Problem: Startup messages were excessively verbose (65+ lines) with poor UX.
Solution: Clean headers with color-coded auth status, transparent volume listing, consistent branding.
Result: 65+ lines → ~10 lines with essential info only.
Status: ✅ COMPLETED
Problems Fixed:
- USE_NONROOT complexity eliminated: Removed 50+ lines of unnecessary code
- Always run as claude user (was already default behavior)
- Removed dead root mode code path from docker-entrypoint.sh
- Simplified UID/GID mapping logic
Results:
- ✅ Consistent trace syntax between local and Docker modes
- ✅ 50+ lines removed from docker-entrypoint.sh
- ✅ Always run as claude user for security and simplicity
Status: ✅ COMPLETED
Problem: Throughout the codebase, Claude CLI was being used incorrectly with '.' as a directory argument.
Root Cause: Claude CLI doesn't take a directory argument. According to claude --help, Claude:
- Starts an interactive session by default
- Automatically works in the current working directory
- Takes `[prompt]` as an optional argument, not a directory path
Issues Fixed:
- `claude .` → `claude` (the '.' was being passed as a prompt, not a directory)
- `claude-yolo .` → `claude-yolo` (no directory argument needed)
- All help text examples showing incorrect usage patterns
Files Updated:
- claude.sh: Fixed 11 examples in help text
- claude-yolo: Fixed 4 examples in help text
- All documentation: Will need updating (README.md, CLAUDE.md, install.sh)
Impact: This explains why `--trace .` was showing version info instead of starting interactive mode - the '.' was being interpreted as a prompt argument to Claude.
Status: ✅ COMPLETED - Help text fixed, documentation needs updating
Problem: docker-entrypoint.sh incorrectly classified Dockerfile-installed files as "user-mounted" and provided poor environment information.
Issues Fixed:
- Incorrect file classification: `.oh-my-zsh`, `.zshrc`, `.local`, etc. marked as "user-mounted" when installed by the Dockerfile
- Poor environment detection: Basic tool versions without context or organization
- Verbose logging noise: All container-installed files logged as if user-mounted
- Missing tool information: No detection of AWS CLI, GitHub CLI, Docker, etc. installed in container
Solution Implemented:
- Smart file classification: Distinguish Dockerfile-installed vs user-mounted files
- Enhanced environment detection: Show all development tools from Dockerfile (Python, Node.js, Go, Rust, AWS CLI, GitHub CLI, Docker)
- Organized verbose output: Categorized sections for Tools, Authentication, Configuration
- Appropriate logging levels: Container-installed files use `log_verbose`, user-mounted use `log_entrypoint`
Technical Implementation:
- Updated file classification in `/root/*` handling with explicit categories
- Enhanced `show_environment_info()` with structured tool detection
- Added authentication status detection (AWS, GCloud, GitHub tokens)
- Improved verbose logging organization with clear sections
Results:
- ✅ Accurate classification: Container vs user-mounted files properly identified
- ✅ Comprehensive tool info: All Dockerfile-installed tools detected and versioned
- ✅ Clean verbose output: Organized sections with relevant information
- ✅ Reduced noise: Container-installed files no longer logged as "user-mounted"
Status: ✅ COMPLETED
Problem: Inconsistent logging patterns scattered throughout claude.sh and docker-entrypoint.sh with mixed approaches to verbosity control.
Issues Fixed:
- Inconsistent patterns: Mix of `[ "$QUIET" != true ] && echo`, `[ "$VERBOSE" = true ] && echo`, and direct `echo`
- Duplicate logic: Repeated verbosity checks throughout both scripts
- Poor maintainability: No centralized logging functions
- Inconsistent stderr usage: Some logs to stdout, others to stderr
Solution Implemented:
- Unified logging functions: `log_info()`, `log_verbose()`, `log_error()`, `log_warn()`
- Specialized functions: `log_auth()`, `log_model()`, `log_proxy()`, `log_entrypoint()`
- Consistent stderr routing: All logs go to stderr, keeping stdout clean
- Centralized flag handling: Single point of verbosity control per script
Technical Implementation:
- Added 6 core logging functions to both scripts
- Migrated 33+ logging patterns in claude.sh to unified system
- Migrated 20+ logging patterns in docker-entrypoint.sh with argument-based detection
- Updated documentation across README.md, CLAUDE.md, CHANGELOG.md
- Maintained backward compatibility
Results:
- ✅ Consistent API: All logging through standardized functions
- ✅ Clean migration: Drop-in replacements for existing patterns
- ✅ Proper flag handling: Centralized QUIET/VERBOSE logic
- ✅ Maintainable code: Eliminated duplicate logging logic
- ✅ Enhanced UX: Clean, controllable output at all verbosity levels
Status: ✅ COMPLETED
Problem: Current --version and startup messages are excessively verbose, poor UX.
Issues Fixed:
- --version chaos: Shows full container startup + environment info + linking messages
- Startup noise: 30+ lines of environment info, entrypoint messages, linking details
- Poor expectations: Users expect clean, fast version info
Solution Implemented:
- --version: Clean local version only ("Claude Code YOLO v0.2.0")
- --version --verbose: Extended info including Claude CLI version via container check
- Startup: Two-line summary with key info:
  `Claude Code YOLO v0.2.0 | Auth: OAuth | Working: /path/to/project`
  `Container: ccyolo-myproject-12345`
- Flags: Added `--quiet` and `--verbose` for user control over output verbosity
Technical Implementation:
- Two-pass argument parsing: collect --verbose/--quiet flags first
- Conditional message display based on verbosity flags
- Docker entrypoint checks for --quiet/--verbose in arguments
- Clean auth method display mapping (claude → OAuth)
Results:
- ✅ --version: Single line output (was 30+ lines)
- ✅ --version --verbose: Extended info when needed
- ✅ Startup: Two-line summary (was verbose environment dump)
- ✅ Control flags: --quiet and --verbose work in both local and Docker modes
Status: ✅ COMPLETED
Problem: Inconsistent claude-trace syntax and unnecessary USE_NONROOT complexity.
Solutions Implemented:
- Fixed claude.sh:305 claude-trace syntax (removed "claude" argument)
- Removed USE_NONROOT variable and dead root mode code
- Simplified docker-entrypoint.sh by 50+ lines
- Always run as claude user for consistency
Result: Cleaner, more maintainable codebase with consistent behavior.
Status: ✅ COMPLETED
Problems Identified:
- Inconsistent claude-trace syntax:
  - claude.sh:305 (local): `--run-with claude .` ❌
  - claude.sh:648 (docker): `--run-with .` ✅
- USE_NONROOT unnecessary complexity:
  - Always set to `true` in Docker mode (line 512)
  - Root mode code path is dead code (lines 246-275 in docker-entrypoint.sh)
  - Adds 100+ lines of UID/GID mapping, symlink creation
  - No real benefit since we always use non-root anyway
- Cursor bot was wrong:
  - Current docker-entrypoint.sh logic is actually correct
  - Transforms: `--run-with .` → `--run-with --dangerously-skip-permissions .`
  - Bot confused about argument order
Solutions:
- Fix local mode claude-trace syntax (remove "claude")
- Remove USE_NONROOT entirely, always run as claude user
- Simplify docker-entrypoint.sh by 50+ lines
Status: Analysis complete
Problem: Cursor bot detected critical bugs in claude-yolo argument parsing.
Root Cause Analysis:
Bug 1 - Infinite Loop: Lines 84-89 in parse_args() missing shift statements:
```sh
--inspect)
    inspect_container # ❌ Missing shift - infinite loop
    ;;
--ps)
    list_containers # ❌ Missing shift - infinite loop
    ;;
```
Bug 2 - Duplicate Handling: Lines 122-137 duplicate parse_args() logic:
```sh
# Main script also handles --inspect/--ps directly
case "$1" in
--inspect) inspect_container ;; # ❌ Duplicate of parse_args
--ps) list_containers ;;        # ❌ Duplicate + no exit
```
Impact:
- Infinite loop when using `--inspect` or `--ps`
- `--ps` shows containers but continues on to exec claude.sh
- Mixed options like `claude-yolo --inspect -v ~/foo:/bar` silently ignore `-v`
- Inconsistent behavior between direct calls and mixed arguments
Technical Details:
- Flow Issue: parse_args() calls inspect_container() → exits, but missing shift causes loop
- Design Flaw: Two separate parsing paths with different behaviors
- Silent Failures: Some argument combinations work, others don't
Status: Critical - requires immediate fix
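A corrected loop can be sketched as below (a minimal illustration, not the actual claude-yolo code): every branch must consume its argument with `shift`, or the `while` loop never advances — exactly the hang described above.

```shell
# Hypothetical sketch of a fixed parse loop: each branch shifts what it consumed.
parse_args() {
    ACTION=""
    MOUNTS=()
    while [ $# -gt 0 ]; do
        case "$1" in
        --inspect) ACTION="inspect"; shift ;;   # the missing shift was the bug
        --ps)      ACTION="ps"; shift ;;
        -v)        MOUNTS+=("$2"); shift 2 ;;   # consume flag and its value
        *)         shift ;;
        esac
    done
}

parse_args --inspect -v "$HOME/foo:/bar"
echo "action=$ACTION mounts=${MOUNTS[*]}"
```

Keeping one parse path (and deferring the `inspect_container`/`list_containers` calls until after parsing) also fixes the duplicate-handling bug, since mixed options like `-v` are no longer silently dropped.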
Problem: Mounting ~/.config/gh/ doesn't work for GitHub CLI authentication in containers.
Root Cause: Modern gh uses secure keyring storage instead of plain text files:
- Host: Tokens stored in macOS Keychain/Linux Secret Service/Windows Credential Manager
- Container: No keyring access, auth fails even with mounted config directory
- Split State: Config files present but tokens inaccessible
Technical Details:
```sh
# Host auth state:
~/.config/gh/config.yml   # Configuration
~/.config/gh/hosts.yml    # May contain tokens OR keyring references
System Keyring            # Actual tokens (secure storage)

# Container reality:
/root/.config/gh/config.yml  # ✅ Mounted successfully
/root/.config/gh/hosts.yml   # ✅ Mounted but may reference unavailable keyring
No System Keyring            # ❌ DBus/keyring services not available
```
Why This Matters: Current codebase has a complete auth system for Claude/AWS/GCloud, but GitHub CLI is missing.
Immediate Impact: Cannot create PRs or manage GitHub repos from within containers.
Solutions Research:
- Environment Variable: `GH_TOKEN="ghp_xxx"` - simple, headless-friendly
- Insecure Storage: `gh auth login --insecure-storage` on host, then the mount works
- Token Injection: `echo $TOKEN | gh auth login --with-token` in container
- Mount Strategy: Add explicit GitHub CLI auth mounting to claude.sh
Status: Research complete, need implementation decision.
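Options 1 and 3 above can be combined into one sketch. This is hypothetical glue, not an implemented decision: the function name and the saved-token path `~/.config/deva/gh-token` are assumptions; `GH_TOKEN`/`GITHUB_TOKEN` and `gh auth login --with-token` are from the research notes.

```shell
# Hypothetical sketch: prefer env-var pass-through, fall back to piping a
# saved token into gh auth login (keyring-free, so it works in containers).
setup_gh_auth() {
    if [ -n "${GH_TOKEN:-}" ] || [ -n "${GITHUB_TOKEN:-}" ]; then
        echo "gh will use GH_TOKEN/GITHUB_TOKEN from the environment"
        return 0
    fi
    if [ -f "$HOME/.config/deva/gh-token" ]; then
        # Token injection: writes plain-text auth, no keyring needed.
        gh auth login --with-token < "$HOME/.config/deva/gh-token"
        return
    fi
    echo "no GitHub token available; skipping gh auth" >&2
    return 1
}
```

Note that `gh` honors `GH_TOKEN` without any login step at all, which is why the env-var branch just reports and returns.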
Problem: Symlinking all /root/* was too broad and risky.
Better approach: Explicit, controlled mounts with proper permissions:
```sh
# claude.sh mounts:
~/.claude    → /root/.claude       # read-write (auth tokens)
~/.config    → /root/.config:ro    # read-only (XDG tools)
~/.aws       → /root/.aws:ro       # read-only
~/.ssh       → /root/.ssh:ro       # read-only
~/.gitconfig → /root/.gitconfig:ro # read-only

# docker-entrypoint.sh:
# - Symlinks specific directories to /home/claude
# - Sets XDG_CONFIG_HOME=/root/.config
# - Maintains a controlled access list
```
Benefits:
- ✅ Security: Read-only where appropriate
- ✅ XDG compliance: Entire .config dir for gh/gcloud/etc
- ✅ Explicit: Clear what's accessible
- ✅ Safe: No unexpected file exposure
Status: -> IMPLEMENTED
Problem: Auth flags conflict with common conventions (-v for volumes vs Vertex).
Current mess:
- `-c`/`--claude` → Claude app (OAuth)
- `-a`/`--api-key` → Anthropic API
- `-b`/`--bedrock` → AWS Bedrock
- `-v`/`--vertex` → Google Vertex AI (blocks -v for volumes!)
Solution: Single --auth-with parameter:
```sh
claude.sh --auth-with vertex .   # Explicit auth method
claude.sh -v ~/.ssh:/root/.ssh . # -v now free for volumes
```
Implementation:
- ✅ Added `--auth-with METHOD` parsing in claude.sh
- ✅ Kept old flags for backward compatibility (with deprecation warnings)
- ✅ Freed up `-v` for volume mounting (Docker convention)
- ✅ Updated claude-yolo to use `-v` instead of `--mount`
Benefits:
- ✅ Follows Docker convention (-v for volumes)
- ✅ Cleaner, extensible auth interface
- ✅ No more flag conflicts
- ✅ Better CLI UX
Status: ✅ IMPLEMENTED
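The `--auth-with` parsing plus the deprecation shim can be sketched as below — an illustrative fragment, not the shipped claude.sh (function name and message wording are mine; the method list and flags are from the log).

```shell
# Hypothetical sketch: one --auth-with entry point, old flags shimmed
# with deprecation warnings, -v left free for volume mounts.
AUTH_MODE="claude"
parse_auth_flag() {
    case "$1" in
    --auth-with)
        case "${2:-}" in
        claude|api-key|bedrock|vertex) AUTH_MODE="$2" ;;
        *) echo "unknown auth method: ${2:-}" >&2; return 1 ;;
        esac
        ;;
    -b|--bedrock)
        echo "warning: -b/--bedrock is deprecated, use --auth-with bedrock" >&2
        AUTH_MODE="bedrock"
        ;;
    esac
}

parse_auth_flag --auth-with vertex
echo "$AUTH_MODE"   # vertex
```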
Problem: Hardcoding each tool's config mount doesn't scale.
Root cause: Mount to /root, run as claude user -> symlink hell.
Initial Proposal: Mount entire ~/.config, use XDG standards.
Implemented Solution: Added flexible volume mounting via -v argument in claude-yolo.
```sh
# New usage - users can mount any config they need:
claude-yolo -v ~/.gitconfig:/root/.gitconfig .
claude-yolo -v ~/.ssh:/root/.ssh:ro .
claude-yolo -v ~/tools:/tools -v ~/data:/data .

# Implementation in claude-yolo:
# - Parse -v/--mount arguments, collect in array
# - Pass to claude.sh via CLAUDE_EXTRA_VOLUMES env var
# - claude.sh adds these volumes to the docker run command
```
Benefits:
- ✅ Flexible: Mount any config/directory as needed
- ✅ Familiar: Uses Docker's -v syntax
- ✅ Secure: Users control what to expose
- ✅ Extensible: No hardcoded tool list to maintain
Result: Zero maintenance. New tools work via explicit mounting.
Status: ✅ IMPLEMENTED - Added -v/--mount support to claude-yolo
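The collect-and-hand-off step can be sketched as below. `CLAUDE_EXTRA_VOLUMES` is the env var named above; the newline-separated serialization is an assumption for illustration — the real hand-off format may differ.

```shell
# Hypothetical sketch of the -v pass-through in claude-yolo.
set -- -v "$HOME/.gitconfig:/root/.gitconfig" -v "$HOME/.ssh:/root/.ssh:ro" .

volumes=()
while [ $# -gt 0 ]; do
    case "$1" in
    -v|--mount) volumes+=("$2"); shift 2 ;;   # collect SRC:DST[:ro] specs
    *) shift ;;
    esac
done

# Serialize for the child process; claude.sh would split these back out
# and append each one as a `docker run -v` argument.
CLAUDE_EXTRA_VOLUMES=$(printf '%s\n' "${volumes[@]}")
export CLAUDE_EXTRA_VOLUMES
echo "$CLAUDE_EXTRA_VOLUMES"
```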
Problem: claude-yolo --trace . fails to add --dangerously-skip-permissions to the claude command.
Root Cause Found:
- In `claude.sh:562`, when `--trace` is used, the command was incorrectly constructed as: `claude-trace --include-all-requests --run-with .`
- Missing `claude` command: should be `claude-trace --include-all-requests --run-with claude .`
Two-Part Fix Implemented:
1. Fixed command construction in `claude.sh:562`: `claude-trace --include-all-requests --run-with claude .`
2. Enhanced argument injection in `docker-entrypoint.sh`:
```sh
elif [ "$cmd" = "claude-trace" ]; then
    # claude-trace --include-all-requests --run-with claude [args]
    # Inject --dangerously-skip-permissions after "claude"
    while parsing args; do
        if [ "${args[$i]}" = "--run-with" ] && [ "${args[$((i+1))]}" = "claude" ]; then
            new_args+=("--run-with" "claude" "--dangerously-skip-permissions")
        fi
    done
```
Result:
- Input: `claude-yolo --trace .`
- Command: `claude-trace --include-all-requests --run-with claude .`
- Executed: `claude-trace --include-all-requests --run-with claude --dangerously-skip-permissions .`
Status: ✅ FIXED - Two-part fix ensures proper command structure and flag injection
Problem: All dev tools are baked into Dockerfile, requiring full image rebuild for new tools.
Current State:
- Tools installed in Dockerfile:92-117 (gh, delta, claude, claude-trace)
- Static installation makes customization inflexible
- Image size grows with every tool added
- No runtime tool management
Solution Options:
Option 1 - Runtime installation:
```sh
# Environment-driven installation in entrypoint
CLAUDE_INSTALL_PACKAGES="gh,terraform,kubectl"
```
Pros: Maximum flexibility, smaller base image. Cons: Slower startup, network dependency, caching complexity.
Option 2 - Project tool manifest:
```yaml
# .claude-tools.yml in project
tools:
  - gh
  - terraform
  - kubectl
```
Pros: Project-specific tools, version control. Cons: Added complexity, manifest management.
Option 3 - Layered base images:
```dockerfile
FROM thevibeworks/ccyolo:base
RUN install-tool gh terraform kubectl
```
Pros: Docker-native, cacheable layers. Cons: Multiple image variants, registry complexity.
Option 4 - Package-manager detection:
```sh
# In entrypoint, detect and install via various PMs
[ -f requirements.txt ] && pip install -r requirements.txt
[ -f package.json ] && npm install -g $(jq -r '.globalDependencies[]' package.json)
```
Pros: Leverages existing ecosystem patterns. Cons: Multiple package manager complexity.
Recommendation: Start with Option 1 (runtime installation) with intelligent caching.
Problem: While Claude auth is seamlessly handled via ~/.claude mounting, other dev tools require manual auth setup inside the container.
Current Auth State:
- ✅ Claude: Auto-mounted via `~/.claude` → `/root/.claude` → `/home/claude/.claude` (symlink)
- ✅ AWS: Auto-mounted via `~/.aws` → `/root/.aws` → `/home/claude/.aws` (symlink)
- ✅ Google Cloud: Auto-mounted via `~/.config/gcloud` → `/root/.config/gcloud` → `/home/claude/.config/gcloud` (symlink)
- ❌ GitHub CLI: Requires manual `gh auth login` or token pasting into `/home/claude/.config/gh/`
- ❌ Docker Hub: No auth mounting for `docker login`
- ❌ Terraform: No auth mounting for `.terraform.d/credentials`
- ❌ NPM: No auth mounting for `.npmrc`
Impact: Inconsistent developer experience - some tools work seamlessly, others require manual setup.
Solution Options:
Option 1: Extend hard-coded auth mounts
```bash
# In claude.sh, add more auth directories
[ -d "$HOME/.config/gh" ] && DOCKER_ARGS+=("-v" "$HOME/.config/gh:/root/.config/gh")
[ -f "$HOME/.npmrc" ] && DOCKER_ARGS+=("-v" "$HOME/.npmrc:/root/.npmrc")
[ -d "$HOME/.docker" ] && DOCKER_ARGS+=("-v" "$HOME/.docker:/root/.docker")
[ -d "$HOME/.terraform.d" ] && DOCKER_ARGS+=("-v" "$HOME/.terraform.d:/root/.terraform.d")
```
Pros: consistent with current approach, minimal complexity. Cons: hard-coded tool list, doesn't scale.
Option 2: Mount entire config directories
```bash
# Mount entire config directories
DOCKER_ARGS+=("-v" "$HOME/.config:/root/.config")
DOCKER_ARGS+=("-v" "$HOME/.local:/root/.local")
```
Pros: catches all XDG-compliant tools automatically. Cons: over-broad mounting, potential security concerns.
Option 3: Selective mounting from a known auth list
```bash
# Auto-detect and mount known auth files/dirs
AUTH_PATHS=(
    ".config/gh"    # GitHub CLI
    ".docker"       # Docker Hub
    ".terraform.d"  # Terraform
    ".npmrc"        # NPM
    ".pypirc"       # PyPI
    ".cargo"        # Rust Cargo
)
```
Pros: balanced approach, extensible list. Cons: requires maintenance of auth path list.
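A sketch of how the `AUTH_PATHS` list could drive mounting (the loop, the `-e` test covering both files and dirs, and the `:ro` read-only suffix are assumptions, not current claude.sh code):

```bash
#!/usr/bin/env bash
# Sketch: mount each known auth path read-only if it exists on the host.
# The path list and :ro suffix are illustrative assumptions.
AUTH_PATHS=(".config/gh" ".docker" ".terraform.d" ".npmrc" ".pypirc" ".cargo")
DOCKER_ARGS=()
mount_auth_paths() {
    local rel
    for rel in "${AUTH_PATHS[@]}"; do
        # -e matches both files (.npmrc) and directories (.docker)
        if [ -e "$HOME/$rel" ]; then
            DOCKER_ARGS+=("-v" "$HOME/$rel:/root/$rel:ro")
        fi
    done
    return 0
}
mount_auth_paths
```

Mounting read-only avoids the container rewriting host credentials, though tools that refresh tokens in place (e.g. OAuth refresh) would need a writable mount.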
Option 4: Env var token pass-through
```bash
# Pass auth tokens as environment variables
[ -n "$GH_TOKEN" ] && DOCKER_ARGS+=("-e" "GH_TOKEN=$GH_TOKEN")
[ -n "$DOCKER_PASSWORD" ] && DOCKER_ARGS+=("-e" "DOCKER_PASSWORD=$DOCKER_PASSWORD")
[ -n "$NPM_TOKEN" ] && DOCKER_ARGS+=("-e" "NPM_TOKEN=$NPM_TOKEN")
```
Pros: secure, no filesystem access required. Cons: token-based only, doesn't work for OAuth flows.
Recommendation: Combine Option 3 (selective mounting) with Option 4 (env var pass-through) for comprehensive auth support.
Files Affected:
- `claude.sh:354-376` (current auth mounting logic)
- `docker-entrypoint.sh:91-127` (symlink creation for claude user)
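The recommended combination (Option 3 plus Option 4) could be sketched as one helper that emits both mount and env-var args; the list contents and function name are assumptions:

```bash
#!/usr/bin/env bash
# Sketch: combine selective auth-path mounting with token pass-through.
# AUTH_PATHS / AUTH_ENV_VARS contents are illustrative, not shipped config.
AUTH_PATHS=(".config/gh" ".docker" ".npmrc")
AUTH_ENV_VARS=(GH_TOKEN NPM_TOKEN DOCKER_PASSWORD)
build_auth_args() {
    DOCKER_ARGS=()
    local rel var
    for rel in "${AUTH_PATHS[@]}"; do
        [ -e "$HOME/$rel" ] && DOCKER_ARGS+=("-v" "$HOME/$rel:/root/$rel")
    done
    for var in "${AUTH_ENV_VARS[@]}"; do
        # ${!var} is bash indirect expansion: the value of the named variable
        [ -n "${!var}" ] && DOCKER_ARGS+=("-e" "$var=${!var}")
    done
    return 0
}
```

This keeps file-based auth (gh, docker) and token-based auth (CI-style `GH_TOKEN`) behind one code path, so adding a tool means extending a list rather than adding another ad-hoc mount line.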
Original Problem: Users wanted multiple Claude instances in same project without container name conflicts.
Original Goal Misunderstanding: We thought users wanted shared containers, but they actually just wanted multiple simultaneous instances.
Simple Solution Implemented:
- Reverted to process-based naming: `claude-code-yolo-${CURRENT_DIR_BASENAME}-$$`
- Keep `--rm` for auto-cleanup: each instance gets its own container
- No complexity needed: each process gets a unique container name via `$$`
Result:
```bash
# Terminal 1:
claude-yolo .  # → claude-code-yolo-myproject-12345
# Terminal 2:
claude-yolo .  # → claude-code-yolo-myproject-67890
# Both run simultaneously, both auto-cleanup
```
Why This Works Better:
- ✅ Simple: no shared state, no daemon logic, no container reuse
- ✅ Isolated: each Claude instance in its own container
- ✅ Clean: containers auto-remove with `--rm`
- ✅ Scalable: run as many instances as needed
Key Insight: Sometimes the simplest solution (unique names per process) is better than complex shared container architecture.
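The whole fix boils down to one naming line in the wrapper; a sketch (image name and flags trimmed, `$$` is the wrapper shell's PID):

```bash
#!/usr/bin/env bash
# Sketch: unique container name per wrapper process via $$ (the shell PID).
# Two terminals get two PIDs, hence two names and no conflict.
CURRENT_DIR_BASENAME="$(basename "$PWD")"
CONTAINER_NAME="claude-code-yolo-${CURRENT_DIR_BASENAME}-$$"
echo "$CONTAINER_NAME"
# docker run --rm --name "$CONTAINER_NAME" -v "$PWD:/workspace" ... <image>
```

PIDs can be reused after a process exits, but since `--rm` deletes the container at exit, a recycled PID never collides with a live container name in practice.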
Status: ✅ RESOLVED - Ultra-simple solution implemented
Problem: Container inspection workflow was cumbersome - required multiple steps to access running containers.
Original Workflow Pain Points:
- Manual discovery: must run `docker ps` → find container → copy name
- Multi-step access: `docker exec -it <name> /bin/zsh` → `su - claude` to get proper user context
Solution Implemented: Added inspection shortcuts to claude-yolo wrapper
Features Added:
- `claude-yolo --inspect`: auto-find and enter container as claude user
- `claude-yolo --ps`: list all containers for current project
- Smart selection: auto-select single container, prompt for multiple
- Project-aware: only shows containers matching current directory pattern
Implementation Details:
```bash
# Container discovery by pattern
CONTAINER_PATTERN="claude-code-yolo-${CURRENT_DIR_BASENAME}-"
find_project_containers() {
    docker ps --filter "name=$CONTAINER_PATTERN" --format "{{.Names}}" 2>/dev/null
}

# Smart container selection
if [ "$num_containers" -eq 1 ]; then
    # Auto-select single container
    exec docker exec -it "$container" gosu claude /bin/zsh
else
    # Prompt user to choose from multiple
    echo "Multiple containers found for this project:"
    # ... interactive selection
fi
```
User Experience:
Before (painful):
```bash
docker ps                                             # Find container
docker exec -it claude-code-yolo-proj-12345 /bin/zsh  # Enter container
su - claude                                           # Switch to proper user
```
After (one command):
```bash
claude-yolo --inspect  # Auto-find + auto-su to claude user
```
Multiple Container Support:
```text
claude-yolo --inspect
Multiple containers found for this project:
1) claude-code-yolo-myproject-12345 (Up 5 minutes)
2) claude-code-yolo-myproject-67890 (Up 2 minutes)
Select container to inspect (1-2): 1
Entering container claude-code-yolo-myproject-12345 as claude user...
```
Files Modified:
- `claude-yolo`: enhanced from simple 22-line wrapper to 75-line tool with container management
- Added help system with `claude-yolo --help`
Status: ✅ COMPLETED - Issue #4 resolved
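The interactive-selection branch elided in the implementation snippet could look like this sketch, using bash `select` (an assumption about shape, not the shipped claude-yolo code):

```bash
#!/usr/bin/env bash
# Sketch: pick one container, auto-selecting when only one exists.
# Prompts go to stderr so the chosen name can be captured from stdout.
select_container() {
    local containers=("$@")
    if [ "${#containers[@]}" -eq 1 ]; then
        echo "${containers[0]}"
        return 0
    fi
    echo "Multiple containers found for this project:" >&2
    local c
    PS3="Select container to inspect (1-${#containers[@]}): "
    select c in "${containers[@]}"; do
        [ -n "$c" ] && { echo "$c"; return 0; }
        echo "Invalid choice" >&2
    done
}
# Usage sketch:
# exec docker exec -it "$(select_container $(find_project_containers))" gosu claude /bin/zsh
```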