
_Available as of v1.1.0_

Beginning with v1.1.0, users can import their virtual machines from VMware and OpenStack into Harvester.

This is accomplished using the vm-import-controller addon.

To use the VM Import feature, users need to enable the vm-import-controller addon.

![](/img/v1.2/vm-import-controller/EnableAddon.png)
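
If you prefer the CLI, the addon can also be enabled with kubectl. This is a minimal sketch; the object name `vm-import-controller` and the `harvester-system` namespace are assumptions, so verify them first with `kubectl get addons.harvesterhci.io -A`:

```shell
# Enable the vm-import-controller addon (name and namespace are assumptions).
$ kubectl patch addons.harvesterhci.io vm-import-controller \
    -n harvester-system --type=merge -p '{"spec":{"enabled":true}}'
```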

By default, vm-import-controller leverages ephemeral storage, which is mounted from `/var/lib/kubelet`.

During a migration, the node hosting a large VM could run out of space on this mount, resulting in subsequent scheduling failures.

To avoid this, users are advised to enable PVC-backed storage and customize the amount of storage needed. As a best practice, the PVC size should be twice the size of the largest VM being migrated, because the PVC is used as scratch space to download the VM and convert its disks into raw image files.

![](/img/v1.2/vm-import-controller/ConfigureAddon.png)
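
As a sketch of what the resulting addon object might look like, with the persistence-related value names assumed from the configuration form above (check the addon's chart values for the authoritative keys):

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Addon
metadata:
  name: vm-import-controller
  namespace: harvester-system
spec:
  enabled: true
  # The keys under valuesContent are illustrative assumptions based on the
  # configuration form shown above.
  valuesContent: |-
    pvc:
      enabled: true
      size: "200Gi"            # roughly 2x the largest VM being migrated
      storageClass: "harvester-longhorn"
```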

## vm-import-controller

Currently, the following source providers are supported:

* VMware
* OpenStack

## API

The vm-import-controller introduces two CRDs.

### Sources

Sources allow users to define valid source clusters.

For example:

<Tabs>
<TabItem value="vmware" label="VMWare" default>

```yaml
apiVersion: migration.harvesterhci.io/v1beta1
kind: VmwareSource
metadata:
  name: vcsim
  namespace: default
spec:
  # The endpoint and dc values here are illustrative; adjust them for your vCenter.
  endpoint: "https://vcsim.default.svc/sdk"
  dc: "DC0"
  credentials:
    name: vsphere-credentials
    namespace: default
```

The secret contains the credentials for the vCenter endpoint:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials
  namespace: default
stringData:
  "username": "user"
  "password": "password"
```
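
Equivalently, the same secret can be created directly with kubectl, using the placeholder values from the manifest above:

```shell
# Create the vCenter credentials secret without writing a manifest.
$ kubectl create secret generic vsphere-credentials \
    -n default \
    --from-literal=username=user \
    --from-literal=password=password
```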

As part of the reconciliation process, the controller logs into vCenter and verifies that the `dc` specified in the source spec is valid.

Once this check passes, the source is marked as ready and can be used for VM migrations.

```shell
$ kubectl get vmwaresource.migration
NAME    STATUS
vcsim   clusterReady
```

</TabItem>
<TabItem value="openstack" label="OpenStack">
For OpenStack-based source clusters, an example definition is as follows:

```yaml
apiVersion: migration.harvesterhci.io/v1beta1
kind: OpenstackSource
metadata:
  name: devstack
  namespace: default
spec:
  # The endpoint and region values here are illustrative; use your own
  # Keystone endpoint and region.
  endpoint: "https://devstack/identity"
  region: "RegionOne"
  credentials:
    name: devstack-credentials
    namespace: default
```

The secret contains the credentials for the OpenStack endpoint:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: devstack-credentials
  namespace: default
stringData:
  "username": "user"
  "password": "password"
  # Project and domain values are placeholders; match them to your deployment.
  "project_name": "admin"
  "domain_name": "default"
  "ca_cert": "pem-encoded-ca-cert"
```
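
Equivalently, with kubectl, assuming the CA certificate is available as a local PEM file (the `./ca.pem` path is a placeholder):

```shell
# Create the OpenStack credentials secret; key names mirror the manifest above.
$ kubectl create secret generic devstack-credentials \
    -n default \
    --from-literal=username=user \
    --from-literal=password=password \
    --from-literal=project_name=admin \
    --from-literal=domain_name=default \
    --from-file=ca_cert=./ca.pem
```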

The OpenStack source reconciliation process attempts to list VMs in the project and, if successful, marks the source as ready.

```shell
$ kubectl get openstacksource.migration
NAME       STATUS
devstack   clusterReady
```
</TabItem>
</Tabs>

### VirtualMachineImport

The VirtualMachineImport CRD provides a way for users to define a source VM and map it to the actual source cluster to perform the VM export and import.

A sample VirtualMachineImport looks like this:

<Tabs>
<TabItem value="vmware" label="VMWare" default>

```yaml
apiVersion: migration.harvesterhci.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: alpine-export-test
  namespace: default
spec:
  virtualMachineName: "alpine-export-test"
  folder: "/vm-folder"
  networkMapping:
  - sourceNetwork: "dvSwitch 1"
    destinationNetwork: "default/vlan1"
  - sourceNetwork: "dvSwitch 2"
    destinationNetwork: "default/vlan2"
  sourceCluster:
    name: vcsim
    namespace: default
    kind: VmwareSource
    apiVersion: migration.harvesterhci.io/v1beta1
  storageClass: "harvester-longhorn"
```

This prompts the controller to export the virtual machine named "alpine-export-test" in the folder `/vm-folder` on the VMware source cluster, process it, and recreate it in the Harvester cluster.
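
A minimal way to kick off the import and follow its progress, assuming the sample manifest above has been saved as `alpine-export-test.yaml`:

```shell
# Apply the import manifest and watch the object until it reports a status.
$ kubectl apply -f alpine-export-test.yaml
$ kubectl get virtualmachineimport.migration alpine-export-test -w
```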

</TabItem>
<TabItem value="openstack" label="OpenStack">

```yaml
apiVersion: migration.harvesterhci.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: openstack-demo
  namespace: default
spec:
  virtualMachineName: "openstack-demo" # name or UUID of the instance
  folder: "/vm-folder"
  networkMapping:
  - sourceNetwork: "shared"
    destinationNetwork: "default/vlan1"
  - sourceNetwork: "public"
    destinationNetwork: "default/vlan2"
  sourceCluster:
    name: devstack
    namespace: default
    kind: OpenstackSource
    apiVersion: migration.harvesterhci.io/v1beta1
  storageClass: "harvester-longhorn"
```

:::note
OpenStack allows multiple instances to share the same instance name. In this scenario, use the instance ID instead, the UUID that uniquely identifies the instance. The reconciliation logic attempts a name-to-ID lookup whenever a name is used.
:::
</TabItem>
</Tabs>

This process can take a while depending on the size of the virtual machine, but users should see `VirtualMachineImages` created for each disk in the defined virtual machine.

The entries listed under `networkMapping` determine how the source network interfaces are mapped to the Harvester networks.

If no match is found, each unmatched network interface is attached to the default `managementNetwork`.

Once the virtual machine is imported successfully, the status of the object changes to `virtualMachineRunning`. Example:

```shell
$ kubectl get virtualmachineimport.migration
NAME                    STATUS
alpine-export-test      virtualMachineRunning
openstack-cirros-test   virtualMachineRunning
```
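
While an import is still running, you can also watch the disk images being created. This sketch assumes Harvester's `virtualmachineimages` CRD; the image names themselves are generated by the controller:

```shell
# Watch VirtualMachineImages appear as each source disk is converted.
$ kubectl get virtualmachineimages.harvesterhci.io -A -w
```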