Supported migration network for MTV
RichardHoch committed Oct 29, 2024
1 parent 37b5c97 commit 12535fe
Showing 6 changed files with 72 additions and 69 deletions.
@@ -192,8 +192,8 @@ You can create a migration plan by using the {ocp} web console to specify a sour

For your convenience, there are two procedures to create migration plans, starting with either a source provider or with specific VMs:

* To start with a source provider, see xref:creating-migration-plan-2-6-3_provider[Creating a migration plan starting with a source provider].
* To start with specific VMs, see xref:creating-migration-plan-2-6-3_vms[Creating a migration plan starting with specific VMs].
* To start with a source provider, see xref:creating-migration-plan_provider[Creating a migration plan starting with a source provider].
* To start with specific VMs, see xref:creating-migration-plan_vms[Creating a migration plan starting with specific VMs].

[WARNING]
====
@@ -208,12 +208,12 @@ include::modules/snip_plan-limits.adoc[]
:context: provider
:provider:

include::modules/creating-migration-plan-2-6-3.adoc[leveloffset=+3]
include::modules/creating-migration-plan.adoc[leveloffset=+3]

:provider!:
:context: vms
:vms:
include::modules/creating-migration-plan-2-6-3.adoc[leveloffset=+3]
include::modules/creating-migration-plan.adoc[leveloffset=+3]

:vms!:
:context: mtv
@@ -352,8 +352,6 @@ include::modules/collected-logs-cr-info.adoc[leveloffset=+3]
include::modules/accessing-logs-ui.adoc[leveloffset=+3]
include::modules/accessing-logs-cli.adoc[leveloffset=+3]

== Additional information

include::modules/mtv-performance-addendum.adoc[leveloffset=+2]
6 changes: 6 additions & 0 deletions documentation/modules/creating-migration-plan-2-6-3.adoc
@@ -56,6 +56,12 @@ To specify a different root device, in the *Settings* section, click the Edit ic
+
If the conversion fails because the boot device provided is incorrect, it is possible to get the correct information by looking at the conversion pod logs.
* *{ocp-short} transfer network*: You can configure an {ocp-short} network in the {ocp-short} web console by clicking *Networking > NetworkAttachmentDefinitions*.
+
To learn more about the different types of networks {ocp-short} supports, see https://docs.openshift.com/container-platform/4.16/networking/multiple_networks/understanding-multiple-networks.html#additional-networks-provided[Additional Networks in OpenShift Container Platform].
+
If you want to change the maximum transmission unit (MTU) of the {ocp-short} transfer network, you must also change the MTU of the VMware migration network. For more information, see xref:selecting-migration-network-for-vmware-source-provider_vmware[Selecting a migration network for a VMware source provider].
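+
For example, a transfer network with a non-default MTU can be defined through a `NetworkAttachmentDefinition`. The following is an illustrative sketch, not a required configuration: the name, namespace, bridge name, IPAM settings, and the MTU value of 9000 are all assumptions for the example.
+
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-transfer-net   # hypothetical name
  namespace: openshift-mtv       # hypothetical namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-transfer-net",
      "type": "bridge",
      "bridge": "br-migration",
      "mtu": 9000,
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.100.0/24"
      }
    }
```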
. {rhv-short} source providers only (Optional):

* *Preserving the CPU model of VMs that are migrated from {rhv-short}*: Generally, the CPU model (type) for {rhv-short} VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model.
114 changes: 51 additions & 63 deletions documentation/modules/creating-migration-plan.adoc
@@ -1,86 +1,74 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: PROCEDURE
:_mod-docs-content-type: PROCEDURE
[id="creating-migration-plan_{context}"]
= Creating a migration plan
ifdef::provider[]
= Creating a migration plan starting with a source provider

You can create a migration plan by using the {ocp} web console.
You can create a migration plan based on a source provider, starting on the *Plans for virtualization* page. Note the specific options for migrations from VMware or {rhv-short} providers.

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.
.Procedure

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.
. In the {ocp} web console, click *Plans for virtualization* and then click *Create Plan*.
+
The *Create migration plan* wizard opens to the *Select source provider* interface.
. Select the source provider of the VMs you want to migrate.
+
The *Select virtual machines* interface opens.
. Select the VMs you want to migrate and click *Next*.
endif::[]

.Prerequisites
ifdef::vms[]
= Creating a migration plan starting with specific VMs

* If {project-short} is not installed on the target cluster, you must add a target provider on the *Providers* page of the web console.
You can create a migration plan based on specific VMs, starting on the *Providers for virtualization* page. Note the specific options for migrations from VMware or {rhv-short} providers.

.Procedure

. In the {ocp} web console, click *Migration* -> *Plans for virtualization*.
. Click *Create plan*.
. In the {ocp} web console, click *Providers for virtualization*.
. In the row of the appropriate source provider, click *VMs*.
+
The *Virtual Machines* tab opens.
. Select the VMs you want to migrate and click *Create migration plan*.
endif::[]
+
The *Create migration plan* pane opens. It displays the source provider's name and suggestions for a target provider and namespace, a network map, and a storage map.
. Enter the *Plan name*.
. Make any needed changes to the editable items.
. Click *Add mapping* to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
. Click *Create migration plan*.
+
{project-short} validates the migration plan and the *Plan details* page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, {project-short} validates the plan again.

. Specify the following fields:
. VMware source providers only (All optional):

* *Plan name*: Enter a migration plan name to display in the migration plan list.
* *Plan description*: Optional: Brief description of the migration plan.
* *Source provider*: Select a source provider.
* *Target provider*: Select a target provider.
* *Target namespace*: Do one of the following:
* *Preserving static IPs of VMs*: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are set with a static IP in vSphere lose their IP. To avoid this, preserve the static IPs by clicking the Edit icon next to *Preserve static IPs*, toggling the *Whether to preserve the static IPs* switch in the window that opens, and then clicking *Save*.
+
{project-short} then issues a warning message about any VMs with a Windows operating system for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to {project-short}.
** Select a target namespace from the list
** Create a target namespace by typing its name in the text box, and then clicking *create "<the_name_you_entered>"*
* *Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS)*: To enter a list of decryption passphrases for LUKS-encrypted devices, in the *Settings* section, click the Edit icon next to *Disk decryption passphrases*, enter the passphrases, and then click *Save*. You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, {project-short} tries each passphrase until one unlocks the device.
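+
The order-independent behavior described here can be sketched in a few lines. This is an illustrative model only, not {project-short} source code; `try_passphrase` is a hypothetical callable standing in for an actual LUKS unlock attempt.
+
```python
def find_unlocking_passphrase(device, passphrases, try_passphrase):
    """Return the first passphrase that unlocks `device`, or None.

    The passphrases can be supplied in any order: each one is tried
    in turn until one unlocks the device.
    """
    for phrase in passphrases:
        if try_passphrase(device, phrase):
            return phrase
    return None
```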
* You can change the migration transfer network for this plan by clicking *Select a different network*, selecting a network from the list, and then clicking *Select*.
* *Specifying a root device*: Applies to multi-boot VM migrations only. By default, {project-short} uses the first bootable device detected as the root device.
+
If you defined a migration transfer network for the {virt} provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the `pod` network is used.
. Click *Next*.
. Select options to filter the list of source VMs and click *Next*.
. Select the VMs to migrate and then click *Next*.
. Select an existing network mapping or create a new network mapping.
.. Optional: Click *Add* to add an additional network mapping.
To specify a different root device, in the *Settings* section, click the Edit icon next to *Root device* and choose a device from the list of commonly used options, or enter a device in the text box.
+
To create a new network mapping:

* Select a target network for each source network.
* Optional: Select *Save current mapping as a template* and enter a name for the network mapping.
. Click *Next*.
. Select an existing storage mapping, which you can modify, or create a new storage mapping.
{project-short} uses the following format for disk location: `/dev/sd<disk_identifier><disk_partition>`. For example, if the second disk is the root device and the operating system is on the disk's second partition, the format would be: `/dev/sdb2`. After you enter the boot device, click *Save*.
+
To create a new storage mapping:
.. If your source provider is VMware, select a *Source datastore* and a *Target storage class*.
.. If your source provider is {rhv-full}, select a *Source storage domain* and a *Target storage class*.
.. If your source provider is {osp}, select a *Source volume type* and a *Target storage class*.
If the conversion fails because the boot device is incorrect, review the conversion pod logs for the correct information.
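+
The disk-location format can be composed as in the following minimal shell sketch; the disk letter and partition number are example values matching the `/dev/sdb2` case described above.
+
```shell
# Compose a root-device path in the /dev/sd<disk_identifier><disk_partition>
# format. Example: the second disk ("b") with the operating system on its
# second partition ("2") gives /dev/sdb2.
disk_identifier="b"
disk_partition="2"
root_device="/dev/sd${disk_identifier}${disk_partition}"
echo "${root_device}"
```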
. Optional: Select *Save current mapping as a template* and enter a name for the storage mapping.
. Click *Next*.
. Select a migration type and click *Next*.
* Cold migration: The source VMs are stopped while the data is copied.
* Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.
* *{ocp-short} transfer network*: You can configure an {ocp-short} network in the {ocp-short} web console by clicking *Networking > NetworkAttachmentDefinitions*.
+
[NOTE]
====
Warm migration is supported only from vSphere and {rhv-full}.
====
. Click *Next*.
. Optional: You can create a migration hook to run an Ansible playbook before or after migration:
.. Click *Add hook*.
.. Select the *Step when the hook will be run*: pre-migration or post-migration.
.. Select a *Hook definition*:
* *Ansible playbook*: Browse to the Ansible playbook or paste it into the field.
* *Custom container image*: If you do not want to use the default `hook-runner` image, enter the image path: `<registry_path>/<image_name>:<tag>`.
To learn more about the different types of networks {ocp-short} supports, see https://docs.openshift.com/container-platform/4.16/networking/multiple_networks/understanding-multiple-networks.html#additional-networks-provided[Additional Networks in OpenShift Container Platform].
+
[NOTE]
====
The registry must be accessible to your {ocp} cluster.
====
If you want to adjust the maximum transmission unit (MTU) of the {ocp-short} transfer network, you must also change the MTU of the VMware migration network. For more information, see xref:selecting-migration-network-for-vmware-source-provider_vmware[Selecting a migration network for a VMware source provider].
. Click *Next*.
. Review your migration plan and click *Finish*.
+
The migration plan is saved on the *Plans* page.
. {rhv-short} source providers only (Optional):

* *Preserving the CPU model of VMs that are migrated from {rhv-short}*: Generally, the CPU model (type) for {rhv-short} VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model.
By default, {project-short} sets the CPU model on the destination cluster as follows: {project-short} preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, {project-short} does not set the CPU model. Instead, the CPU model is later set by {virt}.
+
You can click the {kebab} of the migration plan and select *View details* to verify the migration plan details.
To preserve the cluster-level CPU model of your {rhv-short} VMs, in the *Settings* section, click the Edit icon next to *Preserve CPU model*. Toggle the *Whether to preserve the CPU model* switch, and then click *Save*.
. If the plan is valid, do one of the following:
.. You can run the plan now by clicking *Start migration*.
.. You can run the plan later by selecting it on the *Plans for virtualization* page and following the procedure in xref:running-migration-plan_mtv[Running a migration plan].
@@ -15,6 +15,10 @@ If you do not select a migration network, the default migration network is the `
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
====

include::snip-mtu-value.adoc[]

.Procedure

. In the {ocp} web console, click *Migration* -> *Providers for virtualization*.
@@ -11,6 +11,7 @@ You can select a migration network in the {ocp} web console for a source provide
Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

include::snip_vmware_esxi_nfc.adoc[]
include::snip-mtu-value.adoc[]

.Prerequisites

6 changes: 6 additions & 0 deletions documentation/modules/snip-mtu-value.adoc
@@ -0,0 +1,6 @@
:_mod-docs-content-type: SNIPPET

[NOTE]
====
If you set any maximum transmission unit (MTU) value other than the default in your migration network, you must also set the same value in the {ocp-short} transfer network that you use. For more information about the {ocp-short} transfer network, see xref:creating-migration-plan_provider[Creating a migration plan starting with a source provider] or xref:creating-migration-plan_vms[Creating a migration plan starting with specific VMs].
====
