ZSH Configuration Repository - Documentation for Claude Code (claude.ai/code)
A modular ZSH configuration system, originally derived from Sebastian Tramp's configuration and customized by Hemant Verma. It features centralized logging, platform-specific configuration, and an extensive custom scripts system.
```shell
make install        # Full installation with symlinks
make mac            # Complete macOS setup
make github-setup   # Configure Git
make pytorch-setup  # Setup PyTorch models for image upscaling
make update         # Update repository and submodules
```

Core configuration modules:

- `logging.zsh` - Centralized logging (loaded first for universal access)
- `environment.zsh` - Environment variables and PATH management
- `options.zsh` - Shell options and settings
- `prompt.zsh` - Prompt configuration
- `functions.zsh` - Custom functions and key bindings
- `aliases.zsh` - Command aliases and suffix handlers
- Platform-specific - `darwin.zsh` (macOS) or `linux.zsh` (Linux)
- Application configs - `git.zsh`, `rails.zsh`, `claude.zsh`, etc.
- `completion.zsh` - Tab completion setup
- `private.zsh` - User-specific private configurations
- Modular Design - Each feature area has a dedicated `.zsh` file
- Platform Detection - Automatic OS-specific configuration loading
- External Dependencies - `zsh-syntax-highlighting` submodule
- Path Management - Automated setup for Python, Node, Conda, Ruby, etc.
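The modular loading and platform detection above can be sketched as a minimal loader (illustrative only; the real zshrc wiring may differ, and `platform_file` is an invented helper name):

```shell
# Illustrative loader sketch: core modules in a fixed order (logging first),
# then the platform-specific file chosen from `uname -s`.
ZSH_CONFIG="${ZSH_CONFIG:-$HOME/.config/zsh}"

platform_file() {
  case "$1" in
    Darwin) echo "darwin.zsh" ;;
    Linux)  echo "linux.zsh" ;;
    *)      echo "" ;;
  esac
}

for module in logging environment options prompt functions aliases; do
  if [ -r "$ZSH_CONFIG/$module.zsh" ]; then . "$ZSH_CONFIG/$module.zsh"; fi
done

pf="$(platform_file "$(uname -s)")"
if [ -n "$pf" ] && [ -r "$ZSH_CONFIG/$pf" ]; then . "$ZSH_CONFIG/$pf"; fi
```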
```shell
workspace            # cd ~/workspace
latest-dir           # Enter most recently created directory
path <pattern>       # Find files matching pattern
buf <file>           # Backup file with timestamp
massmove             # Batch rename files interactively
kill-port <port>     # Kill process on specific port
kill-grep <pattern>  # Kill processes matching pattern
clean-pyc            # Remove .pyc files recursively
fix-pep8             # Auto-fix Python PEP8 in staged files
```

API keys enable enhanced functionality but aren't required.
Claude (Anthropic):

```shell
setup-claude-key "sk-ant-api03-your-key-here"
# Stores in ~/.claude/anthropic_api_key
```

Gemini (Google):

```shell
setup-gemini-key "AIzaSyYour-gemini-key-here"
# Stores in ~/.gemini/api_key
```

Available Functions:

- `claude`/`cc` - Claude Code CLI with auto-loaded API key
- `gemini-cli`/`gg` - Gemini CLI with auto-loaded API key
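Under the hood the setup functions presumably just persist the key to a file that the wrappers read back; a minimal sketch of that storage pattern (the function name and generic `api_key` filename are invented, and a scratch directory stands in for `$HOME`):

```shell
# Hypothetical sketch of the key-storage pattern: write the key to a
# file readable only by the owner. Not the actual setup-claude-key code.
store_api_key() {
  dir="$1"; key="$2"
  mkdir -p "$dir"
  printf '%s' "$key" > "$dir/api_key"
  chmod 600 "$dir/api_key"
}

demo_dir="$(mktemp -d)/.gemini"
store_api_key "$demo_dir" "AIza-example-key"
```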
Multi-language CLI system with Ruby, Python, and Rust components, sharing common utilities and base classes.
Architecture: Language-specific directories in bin/ with shared utilities in bin/.common/
Read /Users/hemantv/zshrc/bin/SCRIPTS.md first to understand:
- Available base classes (ScriptBase, InteractiveScriptBase, etc.)
- Existing utilities (Logger, System, ErrorUtils, etc.)
- Services you can reuse (LLMService, FileCache, etc.)
- Common patterns and best practices
Available immediately in the shell, for frequent use:
```shell
calibre-update       # Update Calibre e-book manager
stack-monitors       # Configure stacked monitor setup
merge-pdf            # Merge multiple PDF files
dropbox-backup       # Move directories to Dropbox with symlinks
uninstall-app        # Comprehensive application uninstaller
xcode-icon-generator # Generate app icons for Xcode projects
largest-files        # Find largest files respecting .gitignore patterns
list-scripts         # Show all available scripts
```

Controlled access for system configuration:
```shell
make macos-optimize  # Optimize macOS developer settings
make claude-setup    # Setup Claude Code configuration
make gemini-setup    # Setup Gemini CLI configuration
make xcode-backup    # Backup Xcode essential settings
make vscode-backup   # Backup VS Code settings
make iterm-backup    # Backup iTerm2 configuration
```

Internal repository tools:

```shell
make find-orphans    # Find orphaned Makefile targets
```

Scripts must use `original_working_dir` instead of `Dir.pwd` so they run in the user's original working directory, not the ruby-cli directory. The script system automatically sets `ORIGINAL_WORKING_DIR` and provides an `original_working_dir` method in `ScriptBase`.
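A sketch of why `ORIGINAL_WORKING_DIR` exists: the wrapper captures the caller's directory before any `cd` into the script tree (`launch_script` is an invented name, not the actual wrapper):

```shell
# Hypothetical wrapper sketch: record where the user invoked the command,
# so ScriptBase can expose it as original_working_dir even after a cd.
launch_script() {
  export ORIGINAL_WORKING_DIR="$PWD"
  # The real wrapper would continue: cd into the ruby-cli directory and
  # exec the requested script there.
  echo "$ORIGINAL_WORKING_DIR"
}
```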
Read /Users/hemantv/zshrc/bin/SCRIPTS.md before writing any script. This comprehensive documentation covers:
- All available base classes and utilities
- Complete service catalog with usage examples
- Helper modules and mixins
- Common patterns and best practices
- Quick reference guide for "when you need to..."
Install dependencies:
```shell
make ruby-gems
```

Available gems: tty-prompt, tty-progressbar, pastel, sqlite3, rexml, chunky_png, oily_png
Script template (see SCRIPTS.md for complete patterns):
```ruby
#!/usr/bin/env ruby
require_relative '.common/script_base'

class MyUtilityScript < ScriptBase
  def script_emoji; '🔧'; end
  def script_title; 'My Utility Tool'; end
  def script_description; 'Does something useful'; end
  def script_arguments; '[OPTIONS] <arguments>'; end

  def run
    log_banner(script_title)
    # Implementation here
    show_completion(script_title)
  end
end

MyUtilityScript.execute if __FILE__ == $0
```

For detailed patterns and examples, consult /Users/hemantv/zshrc/bin/SCRIPTS.md.
Core Functions:
```shell
log_success "Operation completed"  # Green + ✅
log_error "Failed to find file"    # Red + ❌ (stderr)
log_warning "Backup recommended"   # Yellow + ⚠️
log_info "Checking requirements"   # Blue + ℹ️
log_progress "Processing data"     # Cyan + 🔄
log_section "Configuration"        # Magenta + 🔧
```

Specialized Functions:

```shell
log_file_created "/path"     # 📄 File operations
log_install "package"        # 📦 Installation
log_brew "Installing tools"  # 🍺 Homebrew
log_git "Committing changes" # 🐙 Git operations
```

Usage in scripts:

```shell
# Source logging in bash scripts
source "$ZSH_CONFIG/logging.zsh"
log_info "Script started"
```

When to use Gemini:
- Analyzing entire codebases (>100KB)
- Verifying implementations across multiple files
- Understanding project-wide patterns
- Context exceeds Claude's limits
Modular PyTorch inference framework for image processing with support for multiple model types. Located in bin/python-cli/ package with CLI script at bin/pytorch_inference.py.
PyTorch Models Setup Script:
Automated setup for PyTorch models with Apple Silicon CoreML conversion. Uses external configuration files for easy maintenance.
```shell
make pytorch-setup  # Run PyTorch models setup script
```

Configuration Files:

- `scripts/requirements.txt` - Python dependencies for PyTorch environment
- `scripts/pytorch-models.json` - Model definitions with URLs and descriptions
Setup Process:
- Creates isolated Python environment in `~/.config/zsh/.models/venv`
- Installs dependencies from requirements.txt
- Downloads PyTorch models from JSON configuration
- Converts models to CoreML format for Apple Silicon optimization
- Generates configuration file with available models
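The environment-creation step can be reproduced in miniature (a scratch path stands in for `~/.config/zsh/.models`; `--without-pip` only keeps this demo light, the real setup installs pip and the requirements):

```shell
# Miniature of the isolated-environment step using a throwaway directory.
tmp="$(mktemp -d)"
python3 -m venv --without-pip "$tmp/venv"
# The real setup continues roughly as:
#   "$tmp/venv/bin/pip" install -r scripts/requirements.txt
```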
Adding New Models:
Update scripts/pytorch-models.json:
```json
{
  "ModelName": {
    "url": "https://example.com/model.pth",
    "filename": "model.pth",
    "description": "Model description"
  }
}
```

- Smart Auto-Optimization: Automatically determines optimal tile size, batch size, and worker count based on image size and device capabilities
- Multi-Device Support: CUDA GPU, Apple Silicon (MPS), and CPU with automatic device detection
- Memory-Efficient Processing: Tiled inference and streaming mode for large images
- Extensible Architecture: Easy to add new model types beyond ESRGAN
Basic Usage (Recommended - Auto-Optimized):
```shell
python pytorch_inference.py --input image.jpg --output result.jpg --model model.pth
```

Advanced Usage:

```shell
# Override specific parameters
python pytorch_inference.py --input image.jpg --output result.jpg --model model.pth --tile 256

# Manual control
python pytorch_inference.py --input image.jpg --output result.jpg --model model.pth --tile 512 --batch-size 4 --workers 2

# Different scale factors
python pytorch_inference.py --input image.jpg --output result.jpg --model model.pth --scale 2
```

The system analyzes image dimensions and available device memory to determine optimal parameters:
CUDA GPU:
- Larger tiles (256-1024px) for better parallelization
- Higher batch sizes (2-8)
- Multiple workers (2-4) based on image size
Apple Silicon (MPS):
- Moderate tiles (100-400px) due to memory constraints
- Smaller batches (1-4)
- Single worker to prevent memory issues
CPU:
- Smaller tiles (75-350px) to avoid memory pressure
- Small batches (1-4)
- Single worker to avoid oversubscription
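As a rough illustration, the per-device starting points above could be tabulated like this (the numbers are only midpoints of the documented ranges; the real optimizer also weighs image size and free memory):

```shell
# Illustrative only: a plausible starting tile size per device type.
default_tile() {
  case "$1" in
    cuda) echo 512 ;;  # CUDA GPU: 256-1024px range
    mps)  echo 256 ;;  # Apple Silicon: 100-400px range
    *)    echo 200 ;;  # CPU: 75-350px range
  esac
}
```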
python_cli/utils.py - Base inference framework:
- `BaseImageInference` class for generic PyTorch models
- Device detection and memory optimization
- Tiled and streaming inference methods
- Image preprocessing and postprocessing
python_cli/esrgan.py - ESRGAN-specific implementation:
- RRDBNet architecture definitions
- ESRGAN model loading logic
- Factory methods for easy instantiation
pytorch_inference.py - CLI interface:
- Command-line argument parsing
- Model type selection
- Error handling with helpful suggestions
To add support for new PyTorch models:
- Create a model class in `python_cli/my_model.py`
- Inherit from `BaseImageInference`
- Implement the `load_model()` method
- Update the CLI script to support the new model type

```python
from python_cli.utils import BaseImageInference

class MyModelInference(BaseImageInference):
    def load_model(self, model_path):
        # Custom model loading logic
        pass
```

The framework includes automatic memory management:
- Estimates memory requirements based on image size and scale factor
- Falls back to streaming mode for very large images
- Provides helpful error messages with optimization suggestions
- Handles device-specific memory constraints
Custom aliases: lg (log), cp (cherry-pick), ri (rebase interactive), rc (rebase continue), pushf (force push)
Configuration includes:
- Rebase editor setup
- SSH key management
- Push behavior defaults
- `ZSH_CONFIG` - Points to `~/.config/zsh`
- `EDITOR` - Set to "vim"
- `PATH` - Extended for development tools (Python, Node, Ruby, etc.)
- Function naming: Use kebab-case (`my-function`, not `my_function`)
- Logging: Always use centralized logging functions, never raw `echo`
- Colors: Use logging functions for consistent emoji + color output
- Error handling: Use `set -euo pipefail` in bash scripts
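A minimal bash-script skeleton following these conventions (the plain-text `log_info` fallback exists only so this sketch runs standalone; real scripts get the emoji versions from logging.zsh):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Convention: source centralized logging instead of using raw echo.
if [ -r "${ZSH_CONFIG:-$HOME/.config/zsh}/logging.zsh" ]; then
  . "${ZSH_CONFIG:-$HOME/.config/zsh}/logging.zsh"
else
  log_info() { echo "[INFO] $*"; }  # plain fallback for this sketch only
fi

log_info "my-script started"
```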
For utility scripts (Ruby preferred):
- Create `bin/my-script.rb` using the ScriptBase template
- Add a wrapper function in `bin/scripts.zsh`
- Update the `list-scripts` function
For setup/backup scripts (bash):
- Create `bin/my-script.sh`
- Add a Makefile target only (no wrapper function)
- Source logging functions
```
├── zshrc                      # Main configuration entry
├── logging.zsh                # Centralized logging (loaded first)
├── environment.zsh            # Environment variables
├── [other core .zsh files]
├── bin/                       # Custom scripts system
│   ├── scripts.zsh            # Wrapper functions
│   ├── .common/               # Ruby utilities
│   └── [script files]
├── functions.d/               # Completion functions
├── Settings/                  # Application backups
└── zsh-syntax-highlighting/   # External dependency
```
The configuration uses symlinks to ~/.config/zsh/ allowing easy updates while maintaining customizations in private.zsh.
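The linking scheme in miniature, exercised against a scratch directory instead of `$HOME` (the exact targets live in the Makefile; this is only a sketch):

```shell
# Sketch: link the repo to a config location, then point the entry file at it.
tmp="$(mktemp -d)"
mkdir -p "$tmp/repo"
echo '# main configuration entry' > "$tmp/repo/zshrc"
ln -sfn "$tmp/repo" "$tmp/config-zsh"            # stands in for ~/.config/zsh
ln -sf  "$tmp/config-zsh/zshrc" "$tmp/dot-zshrc" # stands in for ~/.zshrc
```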
Key installation targets:
- `make install` - Full setup with symlinks
- `make mac` - macOS-specific setup with Homebrew
- `make github-setup` - Git configuration
- `make update` - Update repository and submodules