A kubectl plugin for migrating virtual machines to KubeVirt using Forklift.
kubectl-mtv helps migrate VMs from vSphere, oVirt, OpenStack, EC2, and OVA to Kubernetes/OpenShift using KubeVirt. It's a command-line interface for the Forklift project.
```shell
# Using krew
kubectl krew install mtv

# Or download from releases
# https://github.com/yaacov/kubectl-mtv/releases
```

See the Installation Guide for more options.
kubectl-mtv includes a built-in MCP (Model Context Protocol) server for AI agents that support MCP add‑ons, such as Cursor IDE and Claude Desktop.
See MCP Server Guide for detailed setup instructions and usage examples.
```shell
# vSphere
kubectl mtv create provider --name vsphere-01 --type vsphere \
  --url https://vcenter.example.com \
  --username admin --password secret --cacert @ca.cert
```

Network and storage mappings are created automatically with sensible defaults. Use `--network-pairs` / `--storage-pairs` to override them inline if needed.
```shell
# Using system defaults for network and storage mapping
kubectl mtv create plan --name migration-1 \
  --source vsphere-01 \
  --vms vm1,vm2,vm3

# Overriding mappings inline
kubectl mtv create plan --name migration-1 \
  --source vsphere-01 \
  --vms vm1,vm2,vm3 \
  --network-pairs "VM Network:default" \
  --storage-pairs "datastore1:standard"

# Start the migration
kubectl mtv start plan --name migration-1

# Interactive TUI with scrolling, help panel, and adjustable refresh
kubectl mtv get plans --vms --watch
```

If you need to reuse the same network/storage configuration across multiple plans, create named mappings and reference them:
```shell
# Network mapping
kubectl mtv create mapping network --name prod-net \
  --source vsphere-01 --target openshift \
  --network-pairs "VM Network:default,Management:openshift-sdn/mgmt"

# Storage mapping with enhanced features
kubectl mtv create mapping storage --name prod-storage \
  --source vsphere-01 --target openshift \
  --storage-pairs "datastore1:standard;volumeMode=Block;accessMode=ReadWriteOnce,datastore2:fast;volumeMode=Filesystem" \
  --default-offload-plugin vsphere --default-offload-vendor flashsystem

# Reference them in a plan
kubectl mtv create plan --name migration-1 \
  --source vsphere-01 \
  --network-mapping prod-net \
  --storage-mapping prod-storage \
  --vms vm1,vm2,vm3
```

For a complete walkthrough, see the Quick Start Guide.
Query and explore provider resources before migration:
```shell
# List VMs
kubectl mtv get inventory vms --provider vsphere-01

# Filter VMs by criteria
kubectl mtv get inventory vms --provider vsphere-01 --query "where memoryMB > 4096"

# List networks and storage
kubectl mtv get inventory networks --provider vsphere-01
kubectl mtv get inventory storages --provider vsphere-01
```

See the Inventory Management Guide for advanced queries and filtering.
For optimal VMware disk transfer performance, build a VDDK image from VMware's VDDK SDK:
```shell
# Build VDDK image
kubectl mtv create vddk-image \
  --tar VMware-vix-disklib-8.0.1.tar.gz \
  --tag quay.io/myorg/vddk:8.0.1

# Use it when creating a provider
kubectl mtv create provider --name vsphere-01 --type vsphere \
  --url https://vcenter.example.com \
  --vddk-init-image quay.io/myorg/vddk:8.0.1
```

See the VDDK Setup Guide for detailed instructions.
The built-in help system includes machine-readable output and reference topics for domain-specific query languages:
```shell
# Get help for any command
kubectl mtv help create plan

# Learn the TSL query language or KARL affinity syntax
kubectl mtv help tsl
kubectl mtv help karl

# Machine-readable command schema (JSON/YAML) for automation and AI agents
kubectl mtv help --machine
kubectl mtv help --machine --short get plan
```

See the Command Reference for the full help command documentation.
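The machine-readable schema can be post-processed with standard shell tools. A minimal sketch — the inline sample below is a hypothetical miniature, not the tool's real schema layout:

```shell
# Hypothetical miniature of a machine-readable schema; the actual output of
# `kubectl mtv help --machine` will differ in shape and detail.
cat > /tmp/mtv-schema.json <<'EOF'
{"commands": [{"name": "create plan"}, {"name": "start plan"}]}
EOF

# Count command entries with grep (no jq required).
grep -o '"name"' /tmp/mtv-schema.json | wc -l
```

In practice you would pipe `kubectl mtv help --machine` directly into whatever JSON tooling your automation already uses.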
- Multi-Platform Support: Migrate from vSphere, oVirt, OpenStack, EC2, and OVA
- Auto-Mapping: Automatic network and storage mapping for all source providers
- Flexible Mapping: Use existing mappings, inline pairs, or automatic defaults
- Advanced Queries: Filter and search inventory with powerful query language
- VDDK Support: Optimized VMware disk transfers
- Real-time Monitoring: Track migration progress live
- Timezone-Aware Display: View timestamps in local time or UTC with the `--use-utc` flag
- System Health Checks: Comprehensive health diagnostics for the MTV/Forklift system with actionable recommendations
- Settings Management: View and configure ForkliftController settings (feature flags, performance tuning, resource limits)
- Machine-Readable Help: Full command schema available as JSON/YAML for automation, MCP servers, and AI agents
- `MTV_VDDK_INIT_IMAGE`: Default VDDK init image for VMware providers
- `MTV_INVENTORY_URL`: Base URL for the inventory service
- `MTV_INVENTORY_INSECURE_SKIP_TLS`: Skip TLS verification for inventory service connections (set to "true" to enable)
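These can be exported in your shell before invoking kubectl-mtv; the values below are illustrative, not defaults:

```shell
# Illustrative values -- substitute your own image and inventory URL.
export MTV_VDDK_INIT_IMAGE=quay.io/myorg/vddk:8.0.1
export MTV_INVENTORY_URL=https://inventory.example.com
export MTV_INVENTORY_INSECURE_SKIP_TLS=true

# Subsequent invocations pick these up as defaults, e.g.:
#   kubectl mtv create provider --name vsphere-01 --type vsphere ...
echo "VDDK init image default: $MTV_VDDK_INIT_IMAGE"
```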
Complete Technical Guide - Comprehensive documentation covering all features and use cases
- Installation & Prerequisites
- Quick Start Tutorial
- Provider Management
- Inventory Management
- Mapping Management
- Migration Plan Creation
- Migration Hooks
- MCP Server Integration
- Command Reference
Start the MCP server using docker or podman:
```shell
# Run the MCP server on port 8080
docker run --rm -p 8080:8080 \
  -e MCP_KUBE_SERVER=https://api.cluster.example.com:6443 \
  -e MCP_KUBE_TOKEN=sha256~xxxx \
  quay.io/yaacov/kubectl-mtv-mcp-server:latest

# Run in read-only mode (disables write operations)
docker run --rm -p 8080:8080 \
  -e MCP_KUBE_SERVER=https://api.cluster.example.com:6443 \
  -e MCP_KUBE_TOKEN=sha256~xxxx \
  -e MCP_READ_ONLY=true \
  quay.io/yaacov/kubectl-mtv-mcp-server:latest
```

The server accepts the following environment variables:
| Variable | Default | Description |
|---|---|---|
| `MCP_HOST` | `0.0.0.0` | Listen address |
| `MCP_PORT` | `8080` | Listen port |
| `MCP_KUBE_SERVER` | | Kubernetes API server URL |
| `MCP_KUBE_TOKEN` | | Bearer token for Kubernetes auth |
| `MCP_KUBE_INSECURE` | | Set to `true` to skip TLS verification |
| `MCP_CERT_FILE` | | Path to TLS certificate (enables HTTPS) |
| `MCP_KEY_FILE` | | Path to TLS private key |
| `MCP_OUTPUT_FORMAT` | `text` | Default output format |
| `MCP_MAX_RESPONSE_CHARS` | `0` | Max response size (0 = unlimited) |
| `MCP_READ_ONLY` | `false` | Set to `true` to disable write operations |
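The defaults in the table compose like ordinary shell fallbacks. As a rough sketch (illustrative only — the server resolves these internally, not via this script):

```shell
# Illustrative sketch of the documented defaults; each variable keeps its
# value if set, otherwise falls back to the tabled default.
MCP_HOST="${MCP_HOST:-0.0.0.0}"
MCP_PORT="${MCP_PORT:-8080}"
MCP_OUTPUT_FORMAT="${MCP_OUTPUT_FORMAT:-text}"
MCP_MAX_RESPONSE_CHARS="${MCP_MAX_RESPONSE_CHARS:-0}"
MCP_READ_ONLY="${MCP_READ_ONLY:-false}"

echo "listening on ${MCP_HOST}:${MCP_PORT} (read-only: ${MCP_READ_ONLY})"
```

With none of the variables set, this echoes the stock configuration: listen on 0.0.0.0:8080 with writes enabled.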
Build and test the container image with the MCP end-to-end test suite:
```shell
# Build the image (linux/amd64)
make image-build-amd64

# Run e2e tests against the container image
make test-e2e-mcp-image MCP_IMAGE=quay.io/yaacov/kubectl-mtv-mcp-server

# Run e2e tests against the local binary build
make test-e2e-mcp

# Run e2e tests against an already running server
MCP_SSE_URL=http://localhost:8080/sse make test-e2e-mcp-external
```

You can also set MCP_IMAGE in e2e/mcp/.env (see e2e/mcp/env.example) and use CONTAINER_ENGINE to choose between docker and podman.
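For example, an `e2e/mcp/.env` along these lines — values illustrative; see `e2e/mcp/env.example` for the canonical keys:

```shell
# e2e/mcp/.env -- illustrative values
MCP_IMAGE=quay.io/yaacov/kubectl-mtv-mcp-server
CONTAINER_ENGINE=podman
```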
Deploy the MCP server directly to OpenShift:
```shell
# Deploy the MCP server (pod and service)
oc apply -f https://raw.githubusercontent.com/yaacov/kubectl-mtv/main/deploy/mcp-server.yaml

# Register the MCP server with OpenShift Lightspeed
oc patch olsconfig cluster --type merge \
  -p "$(curl -s https://raw.githubusercontent.com/yaacov/kubectl-mtv/main/deploy/olsconfig-patch.yaml)"
```

To remove the MCP server:

```shell
# Unregister from Lightspeed
oc patch olsconfig cluster --type json \
  -p '[{"op":"remove","path":"/spec/mcpServers"},{"op":"remove","path":"/spec/featureGates"}]'

# Delete the MCP server resources
oc delete -f https://raw.githubusercontent.com/yaacov/kubectl-mtv/main/deploy/mcp-server.yaml
```

See the MCP Server Guide for more details on OpenShift integration.
Apache-2.0