Enhanced kubectl node information with automatic cloud provider detection and cloud-specific details.
## Features

- Single entry point: One command that works with any cloud provider
- Automatic detection: Automatically detects AWS, Azure, GCP, or generic clusters
- Rich information: Shows cloud-specific metadata like instance IDs, zones, and more
- Clean output: Well-formatted table output using the same style as kubectl
- kubectl plugin: Works as a standard kubectl plugin (`kubectl node`)
- Watch mode: Real-time monitoring with the `-w` flag
- Context support: Use `--context` to specify the kubectl context
## Supported cloud providers

- AWS: Shows instance ID, availability zone, and ASG information
- Azure: Shows instance type, resource group, and zone
- GCP: Shows instance ID, zone, node pool, and preemptible status
- Generic: Works with any Kubernetes cluster (see the detection sketch below)
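How a node is matched to a provider is up to each provider class, but a common signal is the node's `spec.providerID`, which every major cloud prefixes with its own scheme. A minimal sketch of that idea (illustrative only; the tool's real `detect()` logic lives in `kubectl_node/providers/` and may also use labels or other metadata):

```python
# Illustrative sketch only: guess the provider from spec.providerID.
# The tool's actual detection may also use labels or other metadata.
def guess_provider(node: dict) -> str:
    provider_id = node.get("spec", {}).get("providerID", "")
    if provider_id.startswith("aws://"):
        return "aws"
    if provider_id.startswith("azure://"):
        return "azure"
    if provider_id.startswith("gce://"):
        return "gcp"
    return "generic"
```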
## Installation

```bash
# Clone the repository
git clone <repository-url>
cd kubectl-nodes-cloud

# Install directly with uv
uv tool install .

# Use the tool
kubectl-node
```

For development:

```bash
# Clone the repository
git clone <repository-url>
cd kubectl-nodes-cloud

# Install in development mode
make install-dev

# Run tests
make test
```

## Usage

After installation, the tool automatically works as a kubectl plugin:
```bash
# Use as kubectl plugin (no symlink needed!)
kubectl node

# Watch mode as kubectl plugin
kubectl node -w
```

kubectl discovers any executable named `kubectl-<name>` on your PATH as a plugin, which is why no symlink is needed; `kubectl plugin list` shows what it has found.

### Basic usage

```bash
# Show all nodes with cloud provider information
kubectl-node
# Or use as kubectl plugin
kubectl node
```

### Context support

```bash
# Use specific context
kubectl-node --context production

# List available contexts
kubectl-node --list-contexts

# Watch nodes in specific context
kubectl-node -w --context staging
```
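Supporting `--context` mostly means forwarding the flag to the underlying `kubectl` call. A minimal sketch of that idea (illustrative; the tool's real code may be organized differently):

```python
# Illustrative sketch: forward an optional kubectl context to the
# underlying "kubectl get nodes" call and parse the JSON result.
import json
import subprocess
from typing import Optional

def get_nodes(context: Optional[str] = None) -> dict:
    cmd = ["kubectl", "get", "nodes", "-o", "json"]
    if context:
        cmd += ["--context", context]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```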
### Watch mode

```bash
# Watch nodes with default 2-second refresh
kubectl-node -w

# Watch with custom refresh interval
kubectl-node -w --watch-interval 5

# Watch specific context
kubectl-node -w --context production --watch-interval 10

# As kubectl plugin
kubectl node -w --context staging
```
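Conceptually, watch mode is a simple render loop: clear the terminal, print the table, sleep for the interval, and repeat until interrupted. A minimal sketch (illustrative; the tool's actual loop lives in the package and may differ):

```python
# Illustrative watch loop: redraw the node table on an interval
# until the user presses Ctrl+C.
import time

def watch(render, interval: float = 2.0) -> None:
    try:
        while True:
            print("\033[2J\033[H", end="")  # ANSI escape: clear screen, cursor home
            render()  # callback that fetches nodes and prints the table
            time.sleep(interval)
    except KeyboardInterrupt:
        print("\nStopped watching.")
```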
## Command-line options

```text
$ kubectl-node --help
Usage: kubectl-node [-h] [-w] [--watch-interval SECONDS] [--context CONTEXT] [--list-contexts] [--version]

Enhanced kubectl node information with cloud provider details

Options:
  -h, --help            show this help message and exit
  -w, --watch           Watch nodes and refresh display periodically
  --watch-interval SECONDS
                        Refresh interval for watch mode (default: 2 seconds)
  --context CONTEXT     Kubectl context to use (default: current context)
  --list-contexts       List available kubectl contexts and exit
  --version             show program's version number and exit
```

## Example output

With an AWS cluster:

```text
Context: production
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME INSTANCE-TYPE AWS-INSTANCE-ID AWS-ZONE AWS-ASG
ip-10-0-1-100.us-west-2.compute.internal Ready <none> 5d v1.28.0 10.0.1.100 54.123.45.67 Amazon Linux 2 5.4.0-1043-aws containerd://1.6.6 t3.medium i-1234567890abcdef0 us-west-2a
ip-10-0-2-200.us-west-2.compute.internal Ready master 5d v1.28.0 10.0.2.200 54.123.45.68 Amazon Linux 2 5.4.0-1043-aws containerd://1.6.6 t3.large i-0987654321fedcba0 us-west-2b
```

In watch mode:

```text
Watching nodes in context 'production' (press Ctrl+C to stop)...
Refresh interval: 2 seconds
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME INSTANCE-TYPE
k3s-control-plane-hel1-tfx Ready control-plane,etcd,master 442d v1.32.5+k3s1 10.253.0.101 65.21.55.95 openSUSE MicroOS 6.15.8-1-default containerd://2.0.5-k3s1.32 cax11
k3s-control-plane-nbg1-ibw Ready control-plane,etcd,master 442d v1.32.5+k3s1 10.254.0.101 49.13.229.173 openSUSE MicroOS 6.15.8-1-default containerd://2.0.5-k3s1.32 cax11
Context: production
Last updated: 2024-08-10 13:45:23
```

Listing contexts:

```bash
# List all available contexts
kubectl-node --list-contexts

# Output:
# Available contexts:
# * current-context
#   staging
#   production
#   development
```

## Project structure

```text
kubectl-nodes-cloud/
├── kubectl_node/
│ ├── __init__.py # Main package
│ ├── main.py # Main entry point with CLI
│ ├── config.py # Configuration constants
│ ├── exceptions.py # Custom exceptions
│ ├── utils.py # Utility functions
│ └── providers/ # Cloud provider implementations
│ ├── __init__.py
│ ├── base.py # Base provider class
│ ├── aws.py # AWS provider
│ ├── azure.py # Azure provider
│ ├── gcp.py # GCP provider
│ ├── generic.py # Generic provider
│ └── manager.py # Provider manager
├── tests/ # Test suite
│ ├── __init__.py
│ ├── test_utils.py
│ ├── test_providers.py
│ ├── test_main.py
│ └── test_context.py # Context functionality tests
├── setup.py # Package setup
├── requirements.txt # Dependencies
├── run_tests.py # Test runner
├── Makefile # Development tasks
└── README.md # This file
```

## Development

```bash
# Install for development
make install-dev

# Run tests
make test

# Demo the tool
make demo

# Watch mode demo
make watch

# Test as kubectl plugin
make plugin-test

# Clean up
make clean
```

## Testing

```bash
# Run all tests
make test
# Or manually
python run_tests.py

# Run specific test file
python -m unittest tests.test_context

# Run with verbose output
python -m unittest -v tests.test_main
```

## Adding a new cloud provider

- Create a new provider class in `kubectl_node/providers/`
- Inherit from `BaseProvider`
- Implement the required methods (a sketch follows this list):
  - `detect(node)`: Return True if this provider matches the node
  - `get_provider_fields(node)`: Return a dict of provider-specific fields
  - `get_additional_headers()`: Return a list of additional column headers
- Add the provider to `ProviderManager` in `manager.py`
- Add tests for the new provider
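A minimal sketch of what such a provider could look like, assuming `BaseProvider` exposes exactly the three methods listed above (the `ExampleProvider` name, the `example://` scheme, and the label lookup are hypothetical):

```python
# Hypothetical provider sketch; assumes BaseProvider defines the three
# methods described in the steps above.
from kubectl_node.providers.base import BaseProvider

class ExampleProvider(BaseProvider):
    """Provider for a hypothetical 'example' cloud."""

    def detect(self, node: dict) -> bool:
        # Claim nodes whose providerID uses the example:// scheme.
        return node.get("spec", {}).get("providerID", "").startswith("example://")

    def get_provider_fields(self, node: dict) -> dict:
        # Pull provider-specific columns from well-known node labels.
        labels = node.get("metadata", {}).get("labels", {})
        return {"EXAMPLE-ZONE": labels.get("topology.kubernetes.io/zone", "<none>")}

    def get_additional_headers(self) -> list:
        return ["EXAMPLE-ZONE"]
```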
## Design principles

The codebase follows these principles:
- Single Responsibility: Each class/function has one clear purpose
- Provider Pattern: Cloud-specific logic is isolated in provider classes
- Error Handling: Comprehensive error handling with custom exceptions
- Testing: Unit tests for all major functionality
- Documentation: Docstrings and clear variable names
## Requirements

- Python 3.6+
- kubectl (configured and working)
- tabulate package (installed automatically; see the output sketch below)
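The kubectl-style column layout in the examples above is what tabulate's `plain` table format produces. A small illustration (the tool's actual rendering configuration may differ):

```python
# Illustration of tabulate's "plain" format, which gives kubectl-style
# left-aligned columns; the tool may configure rendering differently.
from tabulate import tabulate

headers = ["NAME", "STATUS", "ROLES", "AGE", "VERSION"]
rows = [["node-1", "Ready", "<none>", "5d", "v1.28.0"]]
print(tabulate(rows, headers=headers, tablefmt="plain"))
```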
## Troubleshooting

Ensure kubectl is installed and in your PATH.

Make sure you have proper kubectl permissions to list nodes:

```bash
kubectl auth can-i list nodes
```

Verify your kubectl context is set correctly:

```bash
kubectl config current-context
kubectl get nodes
```

List available contexts and verify the name:

```bash
kubectl-node --list-contexts
kubectl config get-contexts
```

After installation with `uv tool install .`, the `kubectl-node` command should be available in your PATH. The kubectl plugin functionality works automatically; no symlinks are needed.
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
## License

MIT License - see LICENSE file for details.