diff --git a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-cnv.adoc b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-cnv.adoc index 6fb34a79de1..72e700e6e75 100644 --- a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-cnv.adoc +++ b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-cnv.adoc @@ -9,9 +9,7 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] Run your {virt} migration plan from the MTV UI or from the command-line. -== Prerequisites - -* You have planned your migration from {virt}. +include::../modules/con_prerequisites-migrating-cnv.adoc[leveloffset=+1] :context: cnv :cnv: diff --git a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-osp.adoc b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-osp.adoc index 256755f5d88..97f13641494 100644 --- a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-osp.adoc +++ b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-osp.adoc @@ -11,9 +11,7 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] Run your {osp} migration plan from the MTV UI or from the command-line. -== Prerequisites - -* You have planned your migration from {osp}. 
+include::../modules/con_prerequisites-migrating-osp.adoc[leveloffset=+1] :context: ostack :ostack: diff --git a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-ova.adoc b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-ova.adoc index 18b10fb189d..9354b926b1b 100644 --- a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-ova.adoc +++ b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-ova.adoc @@ -13,9 +13,7 @@ Run your OVA migration plan from the MTV UI or from the command-line. include::../modules/con_ova-scope-and-limitations.adoc[leveloffset=+1] -== Prerequisites - -* You have planned your migration from OVA. +include::../modules/con_prerequisites-migrating-ova.adoc[leveloffset=+1] :context: ova :ova: diff --git a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-rhv.adoc b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-rhv.adoc index 692dbdf4983..330046bc83f 100644 --- a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-rhv.adoc +++ b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-rhv.adoc @@ -11,9 +11,7 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] Run your {rhv-full} migration plan from the MTV UI or from the command-line. -== Prerequisites - -* You have planned your migration from {rhv-full}. 
+include::../modules/con_prerequisites-migrating-rhv.adoc[leveloffset=+1] :context: rhv :rhv: diff --git a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-vmware.adoc b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-vmware.adoc index 658a73e91a4..c92e27444cc 100644 --- a/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-vmware.adoc +++ b/documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-vmware.adoc @@ -12,9 +12,7 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] Run your VMware migration plan from the MTV UI or from the command-line. -== Prerequisites - -* You have planned your migration from VMware vSphere. +include::../modules/con_prerequisites-migrating-vmware.adoc[leveloffset=+1] :context: vmware :vmware: diff --git a/documentation/modules/about-configuring-target-vm-scheduling.adoc b/documentation/modules/about-configuring-target-vm-scheduling.adoc index 53a7e0d5a55..0d067673257 100644 --- a/documentation/modules/about-configuring-target-vm-scheduling.adoc +++ b/documentation/modules/about-configuring-target-vm-scheduling.adoc @@ -15,6 +15,6 @@ Previously, when you migrated VMs to {virt}, {virt} automatically determined the Target VM scheduling is designed to help you with the following use cases, among others: -* *Business continuity and disaster recovery*: You can use scheduling rules to migrate and power up critical VMs to several sites, in different time zones or otherwise geographically separated by significant distances. This allows you to deploy these VMs as strategic assets for business continuity, such as disaster recovery. +* *Business continuity and disaster recovery*: You can use scheduling rules to migrate critical VMs to several sites, in different time zones or otherwise geographically separated by significant distances, and power them on.
This allows you to deploy these VMs as strategic assets for business continuity, such as disaster recovery. * *Working with fluctuating demands*: In situations where demand for a service might vary significantly, rules for scheduling when to spin up VMs based upon demand allows you to use your resources more efficiently. diff --git a/documentation/modules/about-storage-copy-offload.adoc b/documentation/modules/about-storage-copy-offload.adoc index 3db48d76fa1..eb094086cbd 100644 --- a/documentation/modules/about-storage-copy-offload.adoc +++ b/documentation/modules/about-storage-copy-offload.adoc @@ -15,7 +15,7 @@ You can migrate {vmw} virtual machines (VMs) that are in a storage array network You enable storage copy offload by configuring the storage map in your migration plan to point to your storage array instead of the network you usually use for migration. When you start the migration plan, {project-short} migrates your VMs by copying them to the storage array you choose and using `XCOPY` to copy them directly to {virt}, instead of transmitting the contents of your VMs to {virt}. -The storage copy offload feature has some unique configuration prerequisites, which are discussed in link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.10/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_planning-migration-vmware_mtv#proc_storage-copy-offload-steps[Planning and running storage copy offload migrations]. Once you configure your system, you can migrate plans using storage copy offload by using either the {project-short} UI or its CLI. Instructions for using storage offload have been integrated into the procedures for migrating {vmw} VMs for both the UI and CLI. 
+The storage copy offload feature has some unique configuration prerequisites, which are discussed in link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.10/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_planning-migration-vmware_mtv#proc_storage-copy-offload-steps[Planning and running storage copy offload migrations]. Once you configure your system, you can run migration plans that use storage copy offload with either the {project-short} UI or its CLI. Instructions for using storage offload have been integrated into the procedures for migrating {vmw} VMs for both the UI and CLI. You must ensure that your migration plans do not mix VDDK mappings with copy-offload mappings. Because the migration controller copies disks either through CDI volumes (VDDK) or through Volume Populators (copy-offload), all storage pairs in the plan must either include copy-offload details (a `Secret` + product) or none of them must. Otherwise, the plan fails. diff --git a/documentation/modules/adding-hook-using-ui.adoc b/documentation/modules/adding-hook-using-ui.adoc index 1c067c25f0d..1d71d6a310d 100644 --- a/documentation/modules/adding-hook-using-ui.adoc +++ b/documentation/modules/adding-hook-using-ui.adoc @@ -29,14 +29,14 @@ You can run one pre-migration hook, one post-migration hook, or one of each per .. In the *Pre migration hook* section, toggle the *Enable hook* switch to *Enable pre migration hook*. .. Enter the *Hook runner image*. If you are specifying the `spec.playbook`, you need to use an image that has an `ansible-runner`. .. Optional: Enter the *Service account* name. The service account must have the necessary RBAC permissions to manage cluster resources and at least write access for the `openshift-mtv` namespace where hooks execute. -.. Paste your hook as a YAML file in the *Ansible playbook* text box. +.. Paste your hook as a YAML file in the *Ansible Playbook* text box. .
For a post-migration hook, perform the following steps: .. In the *Post migration hook*, toggle the *Enable hook* switch to *Enable post migration hook*. .. Enter the *Hook runner image*. If you are specifying the `spec.playbook`, you need to use an image that has an `ansible-runner`. .. Optional: Enter the *Service account* name. The service account must have the necessary RBAC permissions to manage cluster resources and at least write access for the `openshift-mtv` namespace where hooks execute. -.. Paste your hook as a YAML file in the *Ansible playbook* text box. +.. Paste your hook as a YAML file in the *Ansible Playbook* text box. . At the top of the tab, click *Update hooks*. + diff --git a/documentation/modules/compatibility-guidelines.adoc b/documentation/modules/compatibility-guidelines.adoc index 84e843a7669..a4659197507 100644 --- a/documentation/modules/compatibility-guidelines.adoc +++ b/documentation/modules/compatibility-guidelines.adoc @@ -27,7 +27,7 @@ Generally it is advised to upgrade {rhv-full} Manager to the previously mentione Therefore, it is recommended to upgrade {rhv-short} to the supported version above before the migration to {virt}. -However, migrations from {rhv-short} 4.3.11 were tested with {project-short} 2.3, and might work in practice in many environments using {project-short} {project-version}. In this case, it is recommended to upgrade {rhv-full} Manager to the previously mentioned supported version before the migration to {virt}. +However, migrations from {rhv-short} 4.3.11 were tested with {project-short} 2.3, and might work in practice in many environments that use {project-short} {project-version}. In this case, it is recommended to upgrade {rhv-full} Manager to the previously mentioned supported version before the migration to {virt}.
==== [id="openshift-operator-life-cycles"] diff --git a/documentation/modules/con_about-configuring-importer-pods.adoc b/documentation/modules/con_about-configuring-importer-pods.adoc index a8a5fb72cc8..d59ac8c0298 100644 --- a/documentation/modules/con_about-configuring-importer-pods.adoc +++ b/documentation/modules/con_about-configuring-importer-pods.adoc @@ -8,7 +8,7 @@ = About scheduling importer pods [role="_abstract"] -{project-full} uses `virt-v2v` convertor pods, or _importer pods_, to transfer data from VMware source virtual machines (VMs) to target VMs. +{project-full} uses `virt-v2v` converter pods, or _importer pods_, to transfer data from VMware source virtual machines (VMs) to target VMs. By default, {virt} assigns the nodes to which these importer pods transfer data. However, for cold migrations from VMware VMs, you can schedule the destination nodes for the importer pods. diff --git a/documentation/modules/con_common-migration-issues.adoc b/documentation/modules/con_common-migration-issues.adoc index 59814cfa458..add4e17f4ce 100644 --- a/documentation/modules/con_common-migration-issues.adoc +++ b/documentation/modules/con_common-migration-issues.adoc @@ -20,7 +20,7 @@ Verify that you have created a network mapping that correctly links the source n To resolve this issue: . Create the required network attachment definition in {virt}. -. Update or recreate your network mapping to reference the correct destination network. +. Update or re-create your network mapping to reference the correct destination network. . Validate that the network mapping shows a `Ready` status before starting the migration. *Why does warm migration fail with a snapshot error?* @@ -50,7 +50,7 @@ Invalid VM names include those that: * Use uppercase letters * Use a name that differs from the VM's files or folder name on the datastore -{project-short} automatically adjusts non-compliant VM names in the target cluster by replacing invalid characters. 
Alternatively, you can rename target VMs in the {project-short} UI during migration plan creation. +{project-short} automatically adjusts noncompliant VM names in the target cluster by replacing invalid characters. Alternatively, you can rename target VMs in the {project-short} UI during migration plan creation. For {vmw} environments, you can use Storage vMotion to rename the VM before migration. This migration process automatically renames the VM's files and folder on the datastore to match the new name you have given it in the vSphere Client. Alternatively, you can manually remove the VM from inventory, rename the files and folders, edit the `.vmx` file to update the references, and then re-add the VM to the inventory. diff --git a/documentation/modules/con_prerequisites-migrating-cnv.adoc b/documentation/modules/con_prerequisites-migrating-cnv.adoc new file mode 100644 index 00000000000..faef3e0d07c --- /dev/null +++ b/documentation/modules/con_prerequisites-migrating-cnv.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-cnv.adoc + +:_mod-docs-content-type: CONCEPT +[id="con_prerequisites-migrating-cnv_{context}"] += Prerequisites + +[role="_abstract"] +Ensure that you have completed the planning steps before running your {virt} migration. + +* You have planned your migration from {virt}. 
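As a hedged illustration of the copy-offload mapping rule stated in the `about-storage-copy-offload.adoc` change above — every storage pair in a plan carries copy-offload details, or none does — a consistent storage map might look like the following sketch. The `offloadPlugin` block and its exact field placement are assumptions for illustration; only the `Secret`-plus-product pattern and the `vantara` product string come from the documented text:

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: offload-map            # hypothetical name
  namespace: openshift-mtv
spec:
  map:
  # Every pair below carries copy-offload details; mixing offload pairs
  # with plain (VDDK) pairs in one plan causes the plan to fail.
  - source:
      id: f2737930-b567-451a-9ceb-2887f6207009   # placeholder datastore moRef
    destination:
      storageClass: san-block
    offloadPlugin:                               # assumed field name
      vsphereXcopyConfig:
        secretRef: storage-array-creds           # Secret with array credentials
        storageVendorProduct: vantara
  - source:
      id: a1b2c3d4-0000-1111-2222-333344445555   # placeholder datastore moRef
    destination:
      storageClass: san-block
    offloadPlugin:
      vsphereXcopyConfig:
        secretRef: storage-array-creds
        storageVendorProduct: vantara
----

A plan referencing this map would copy all disks through Volume Populators; to use VDDK instead, every pair would omit the offload block.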
diff --git a/documentation/modules/con_prerequisites-migrating-osp.adoc b/documentation/modules/con_prerequisites-migrating-osp.adoc new file mode 100644 index 00000000000..a3327f61e5f --- /dev/null +++ b/documentation/modules/con_prerequisites-migrating-osp.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-osp.adoc + +:_mod-docs-content-type: CONCEPT +[id="con_prerequisites-migrating-osp_{context}"] += Prerequisites + +[role="_abstract"] +Ensure that you have completed the planning steps before running your {osp} migration. + +* You have planned your migration from {osp}. diff --git a/documentation/modules/con_prerequisites-migrating-ova.adoc b/documentation/modules/con_prerequisites-migrating-ova.adoc new file mode 100644 index 00000000000..1103cdf19c9 --- /dev/null +++ b/documentation/modules/con_prerequisites-migrating-ova.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-ova.adoc + +:_mod-docs-content-type: CONCEPT +[id="con_prerequisites-migrating-ova_{context}"] += Prerequisites + +[role="_abstract"] +Ensure that you have completed the planning steps before running your OVA migration. + +* You have planned your migration from OVA. 
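The `convertorLabels`, `convertorNodeSelector`, and `convertorAffinity` fields described in the `proc_migrating-vms-cli-vmware.adoc` changes in this patch might be combined in a `Plan` spec as in the following sketch. The values and the exact placement under `spec` are assumptions; only the field names and their cold-migration-only scope come from the documented text:

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vmware-cold-plan       # hypothetical name
  namespace: openshift-mtv
spec:
  warm: false                  # the convertor* fields apply to cold migrations only
  convertorLabels:
    workload: disk-conversion  # illustrative organizational label
  convertorNodeSelector:
    disk-conversion-tier: high-io   # dedicate high-I/O nodes to conversion
  convertorAffinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   # soft-affinity rule
      - weight: 1
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a           # keep converter pods near the VMware network
----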
diff --git a/documentation/modules/con_prerequisites-migrating-rhv.adoc b/documentation/modules/con_prerequisites-migrating-rhv.adoc new file mode 100644 index 00000000000..010d3df13a8 --- /dev/null +++ b/documentation/modules/con_prerequisites-migrating-rhv.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-rhv.adoc + +:_mod-docs-content-type: CONCEPT +[id="con_prerequisites-migrating-rhv_{context}"] += Prerequisites + +[role="_abstract"] +Ensure that you have completed the planning steps before running your {rhv-full} migration. + +* You have planned your migration from {rhv-full}. diff --git a/documentation/modules/con_prerequisites-migrating-vmware.adoc b/documentation/modules/con_prerequisites-migrating-vmware.adoc new file mode 100644 index 00000000000..7c2f0f765fc --- /dev/null +++ b/documentation/modules/con_prerequisites-migrating-vmware.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * documentation/doc-Migrating_your_virtual_machines/assemblies/assembly_migrating-from-vmware.adoc + +:_mod-docs-content-type: CONCEPT +[id="con_prerequisites-migrating-vmware_{context}"] += Prerequisites + +[role="_abstract"] +Ensure that you have completed the planning steps before running your VMware migration. + +* You have planned your migration from VMware vSphere. diff --git a/documentation/modules/con_troubleshooting-storage-copy-offload.adoc b/documentation/modules/con_troubleshooting-storage-copy-offload.adoc index 14223ca30e8..1af313d61dc 100644 --- a/documentation/modules/con_troubleshooting-storage-copy-offload.adoc +++ b/documentation/modules/con_troubleshooting-storage-copy-offload.adoc @@ -6,6 +6,7 @@ [id="con_troubleshooting-storage-copy-offload_{context}"] = Troubleshooting storage copy offload +[role="_abstract"] This section describes problems that are unique to storage copy offload and how you can resolve them. 
[id=sco-vsphere-esxi-connectivity_{context}] @@ -15,7 +16,7 @@ Remote ESXi connection fails with a SOAP error:: + *Description*: Sometimes a remote ESXi execution fails, returning a SOAP error with no apparent root cause message. + -*Explanation*: Because vSphere invokes some SOAP/REST endpoints on the ESXi, a connection can fail because of standard error reasons that vanish after the next try. +*Explanation*: Because vSphere invokes some SOAP or REST endpoints on the ESXi, a connection can fail because of standard error reasons that vanish after the next try. + *Solution*: If the populator fails, the migration can be restarted. Try to restart or retry the populator, or restart the migration. @@ -30,7 +31,7 @@ CLI Fault: The object or item referred to could not be found. `, includes the `token`, `userID`, and `projectID` that you need for authentication using a token with user ID. +The output, referred to here as ``, includes the `token`, `userID`, and `projectID` that you need for authentication by using a token with user ID. . Create a `Secret` manifest similar to the following: -** For authentication using a token with user ID: +** For authentication by using a token with user ID: + [source,yaml] ---- @@ -70,7 +70,7 @@ stringData: EOF ---- -** For authentication using a token with user name: +** For authentication by using a token with user name: + [source,yaml] ---- diff --git a/documentation/modules/proc_migrating-vms-cli-vmware.adoc b/documentation/modules/proc_migrating-vms-cli-vmware.adoc index cf09d0305e5..78e8916063e 100644 --- a/documentation/modules/proc_migrating-vms-cli-vmware.adoc +++ b/documentation/modules/proc_migrating-vms-cli-vmware.adoc @@ -60,10 +60,10 @@ where: Is an optional section in which you can specify a provider's `name` and `uid`. ``:: -Specifies the vCenter user or the ESX/ESXi user. +Specifies the vCenter user or the ESX or ESXi user. ``:: -Specifies the password of the vCenter user or the ESX/ESXi user. 
+Specifies the password of the vCenter user or the ESX or ESXi user. `<"true"/"false">`:: Specifies `"true"` to skip certificate verification, and specifies `"false"` to verify the certificate. Defaults to `"false"` if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. @@ -72,7 +72,7 @@ Specifies `"true"` to skip certificate verification, and specifies `"false"` to Specifies the CA cert object. When this field is not set and 'skip certificate verification' is disabled, {project-short} attempts to use the system CA. ``:: -Specifies the API endpoint URL of the vCenter or the ESX/ESXi, for example, `https:///sdk`. +Specifies the API endpoint URL of the vCenter or the ESX or ESXi, for example, `https:///sdk`. . Create a `Provider` manifest for the source provider: + @@ -237,7 +237,7 @@ Specifies the `Secret` that contains the storage provider credentials. ``:: Specifies the name of the storage product used in the migration. For example, `vantara` for Hitachi Vantara. Storage copy offload only. Valid strings are listed in the table that follows this CR. + -Storage copy offload is a feature that allows you to migrate {vmw} virtual machines (VMs) that are in a storage array network (SAN) more efficiently. This feature makes use of the command `vmkfstools` on the ESXi host, which invokes the `XCOPY` command on the storage array using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. For {project-first} 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration. 
For more information, see link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.11/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_planning-migration-vmware_mtv#about-storage-copy-offload_vmware[Migrating {vmw} virtual machines by using storage copy offload]. +Storage copy offload is a feature that allows you to migrate {vmw} virtual machines (VMs) that are in a storage array network (SAN) more efficiently. This feature makes use of the command `vmkfstools` on the ESXi host, which invokes the `XCOPY` command on the storage array by using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. For {project-first} 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration. For more information, see link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.11/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_planning-migration-vmware_mtv#about-storage-copy-offload_vmware[Migrating {vmw} virtual machines by using storage copy offload]. ``:: Specifies the {vmw} vSphere datastore moRef. For example, `f2737930-b567-451a-9ceb-2887f6207009`. To retrieve the moRef, see xref:retrieving-vmware-moref_vmware[Retrieving a {vmw} vSphere moRef]. @@ -454,11 +454,11 @@ If you set `pvcNameTemplateUseGenerateName` to `false`, the generated PVC name m ==== `skipGuestConversion`:: -Specifies whether VMs are converted before migration using the `virt-v2v` tool, which makes the VMs compatible with {virt}. -* When set to `false`, the default value, {project-short} migrates VMs using `virt-v2v`. -* When set to `true`, {project-short} migrates VMs using raw copy mode, which copies the VMs without converting them first. 
+Specifies whether VMs are converted before migration by using the `virt-v2v` tool, which makes the VMs compatible with {virt}. +* When set to `false`, the default value, {project-short} migrates VMs by using `virt-v2v`. +* When set to `true`, {project-short} migrates VMs by using raw copy mode, which copies the VMs without converting them first. + -Raw copy mode copies VMs without converting them with `virt-v2v`. This provides faster conversions for migrating VMs running a wider range of operating systems and supports migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on {virt}. For more information on `virt-v2v`, see xref:virt-v2v-mtv_mtv[How {project-short} uses the virt-v2v tool]. +Raw copy mode copies VMs without converting them with `virt-v2v`. This provides faster conversions for migrating VMs running a wider range of operating systems and supports migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated by using raw copy mode might not function properly on {virt}. For more information on `virt-v2v`, see xref:virt-v2v-mtv_mtv[How {project-short} uses the virt-v2v tool]. `targetAffinity`:: Specifies a VM target affinity rule that is entered in the lines following this label. This is an optional label. @@ -472,7 +472,7 @@ Specifies organizational or operational labels to migrated VMs for identificatio Specifies the key-value pairs that must be matched for VMs to be scheduled on nodes. This is an optional label. `convertorLabels`:: -Cold migrations only: Specifies organizational or operational labels for the `virt-v2v` convertor pods (importer pods) for identification and management. This is an optional label. +Cold migrations only: Specifies organizational or operational labels for the `virt-v2v` converter pods (importer pods) for identification and management. This is an optional label. 
+ [IMPORTANT] ==== @@ -481,17 +481,17 @@ To ensure proper system functionality, system-managed labels override any user-d + For more details on labels and selectors in Kubernetes, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#labels[Labels and Selectors]. + -`convertorLabels`, `convertorNodeSelector`, and `convertorAffinity` are fields that support scheduling the `virt-v2v` conversion pod (importer pod) for cold migrations from {vmw} providers. With this feature, you can set the `convertorLabels`, `convertorNodeSelector`, and `convertorAffinity` that control the labels, `noteSelector`, and `Affinity` of the convertor pod. +`convertorLabels`, `convertorNodeSelector`, and `convertorAffinity` are fields that support scheduling the `virt-v2v` converter pod (importer pod) for cold migrations from {vmw} providers. With this feature, you can set the `convertorLabels`, `convertorNodeSelector`, and `convertorAffinity` that control the labels, `nodeSelector`, and `Affinity` of the converter pod. + For more information on importer files, see {mtv-plan}assembly_planning-migration-vmware#con_about-configuring-importer-pods_vmware[About scheduling importer pods]. `convertorNodeSelector`:: -Cold migrations only: Specifies the key-value pairs that must be matched for data to be transferred by the `virt-v2v` convertor pods (importer pods) to the specified target nodes. This is an optional label. With this feature, you can dedicate specific nodes for disk conversion workloads that require high I/O performance or network access to source VMware infrastructure. +Cold migrations only: Specifies the key-value pairs that must be matched for data to be transferred by the `virt-v2v` converter pods (importer pods) to the specified target nodes. This is an optional label. With this feature, you can dedicate specific nodes for disk conversion workloads that require high I/O performance or network access to source VMware infrastructure.
+ For more details on node selectors in Kubernetes, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector[nodeSelector]. `convertorAffinity`:: -Cold migrations only: Specifies a hard-affinity or a soft-affinity rule for `virt-v2v` convertor pods (importer pods). This is an optional label. Affinity rules can be used to optimize placement for disk conversion performance, such as co-locating with storage or ensuring network proximity to VMware infrastructure for cold migration data transfers. +Cold migrations only: Specifies a hard-affinity or a soft-affinity rule for `virt-v2v` converter pods (importer pods). This is an optional label. Affinity rules can be used to optimize placement for disk conversion performance, such as co-locating with storage or ensuring network proximity to VMware infrastructure for cold migration data transfers. + For more information on affinity rules in Kubernetes, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity[Affinity and anti-affinity]. diff --git a/documentation/modules/proc_storage-copy-offload-auto-ssh-set-up.adoc b/documentation/modules/proc_storage-copy-offload-auto-ssh-set-up.adoc index 42832442daa..96051ded842 100644 --- a/documentation/modules/proc_storage-copy-offload-auto-ssh-set-up.adoc +++ b/documentation/modules/proc_storage-copy-offload-auto-ssh-set-up.adoc @@ -77,7 +77,7 @@ $ oc get secrets -l app.kubernetes.io/component=ssh-keys -n openshift-mtv $ oc get secret -o yaml -n openshift-mtv ---- + -. Optional: If needed, you can replace an auto-generated key pair by running the following command: +. 
Optional: If needed, you can replace an autogenerated key pair by running the following command: + [source,terminal] ---- diff --git a/documentation/modules/proc_storage-copy-offload-general-ssh-set-up.adoc b/documentation/modules/proc_storage-copy-offload-general-ssh-set-up.adoc index ca7bd3c4f64..0a7a6976d8d 100644 --- a/documentation/modules/proc_storage-copy-offload-general-ssh-set-up.adoc +++ b/documentation/modules/proc_storage-copy-offload-general-ssh-set-up.adoc @@ -13,19 +13,15 @@ Although SSH keys are automatically generated when you choose to use the SSH met Procedures for both options are given in the sections that follow. +.Procedure [id="storage-copy-offload-general-ssh-security_{context}"] -== Important notes and security considerations - -- All public keys must include command restrictions for security. -- The command path in the restrictions must match the secure script path: `/vmfs/volumes/{datastore-name}/secure-vmkfstools-wrapper.py`. -- You must install the SSH key in each ESXi host in your migration environment. -- SSH service must be enabled on all target ESXi hosts. -- To support ESXi access control, commands are restricted to `vmkfstools` operations only. - -== Security recommendations - -It is recommended to follow the following security recommendations: - -- Use separate key pairs for different environments. -- Rotate keys periodically. -- Consider using shorter-lived keys for enhanced security. +- *Important notes and security considerations* +** All public keys must include command restrictions for security. +** The command path in the restrictions must match the secure script path: `/vmfs/volumes/{datastore-name}/secure-vmkfstools-wrapper.py`. +** You must install the SSH key in each ESXi host in your migration environment. +** SSH service must be enabled on all target ESXi hosts. +** To support ESXi access control, commands are restricted to `vmkfstools` operations only. 
+- *Security recommendations* +** Use separate key pairs for different environments. +** Rotate keys periodically. +** Consider using shorter-lived keys for enhanced security. diff --git a/documentation/modules/proc_storage-copy-offload-manual-ssh-set-up.adoc b/documentation/modules/proc_storage-copy-offload-manual-ssh-set-up.adoc index bda6bcb17bc..affda0b54da 100644 --- a/documentation/modules/proc_storage-copy-offload-manual-ssh-set-up.adoc +++ b/documentation/modules/proc_storage-copy-offload-manual-ssh-set-up.adoc @@ -36,7 +36,7 @@ spec: esxiCloneMethod: "ssh" ---- -. Get the public key from the auto-generated secret by performing the following steps: +. Get the public key from the autogenerated secret by performing the following steps: .. Get a list of SSH key secrets by running the following command: + diff --git a/documentation/modules/proc_storage-copy-offload-vib-set-up.adoc b/documentation/modules/proc_storage-copy-offload-vib-set-up.adoc index 8d09d9cc017..3faffb95727 100644 --- a/documentation/modules/proc_storage-copy-offload-vib-set-up.adoc +++ b/documentation/modules/proc_storage-copy-offload-vib-set-up.adoc @@ -7,7 +7,7 @@ = Setting up storage copy offload using the VIB [role="_abstract"] -You can set up storage copy offload using the vSphere Installation on Bundle (VIB). This is the default method for running storage copy offload migrations. +You can set up storage copy offload by using the vSphere Installation Bundle (VIB). This is the default method for running storage copy offload migrations. [IMPORTANT] ==== diff --git a/documentation/modules/proc_troubleshooting-resize-disk-image.adoc b/documentation/modules/proc_troubleshooting-resize-disk-image.adoc index 9c735a6088d..c014f0eec78 100644 --- a/documentation/modules/proc_troubleshooting-resize-disk-image.adoc +++ b/documentation/modules/proc_troubleshooting-resize-disk-image.adoc @@ -49,6 +49,4 @@ spec: . After migration, verify that the VM can boot and access all disk volumes.
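Tying together the SSH security notes from the `proc_storage-copy-offload-general-ssh-set-up.adoc` changes above: a public key installed on an ESXi host for this feature carries a forced-command restriction whose path matches the secure wrapper script. The SSH options chosen, the datastore name, and the key material below are illustrative placeholders, not values mandated by the product:

----
command="/vmfs/volumes/datastore1/secure-vmkfstools-wrapper.py",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...EXAMPLE mtv-offload-key
----

With this entry in the host's `authorized_keys`, the key can invoke only the wrapper script (and therefore only `vmkfstools` operations), regardless of what command the client requests.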
[role="_additional-resources"] -.Additional resources - * link:https://docs.openshift.com/container-platform/latest/virt/storage/virt-configuring-cdi-for-filesystem-overhead.html[Configuring CDI for file system overhead] diff --git a/documentation/modules/ref_avoid-network-load.adoc b/documentation/modules/ref_avoid-network-load.adoc index 29ad39bde18..985c5343b90 100644 --- a/documentation/modules/ref_avoid-network-load.adoc +++ b/documentation/modules/ref_avoid-network-load.adoc @@ -11,7 +11,7 @@ You can reduce the network load on {vmw} networks by selecting the migration net By incorporating a virtualization provider, {project-short} enables the selection of a specific network, which is accessible on the ESXi hosts, for the purpose of migrating virtual machines to {ocp-short}. Selecting this migration network from the ESXi host in the {project-short} UI ensures that the transfer is performed using the selected network as an ESXi endpoint. -It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated. +It is imperative to ensure that the selected network has connectivity to the Red Hat OpenShift Container Platform interface, that it has adequate bandwidth for migrations, and that the network interface is not saturated. In environments with fast networks, such as 10 GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads. diff --git a/documentation/modules/ref_fast-datastore-read-speeds.adoc b/documentation/modules/ref_fast-datastore-read-speeds.adoc index e2a43c59275..9e4ff8326c0 100644 --- a/documentation/modules/ref_fast-datastore-read-speeds.adoc +++ b/documentation/modules/ref_fast-datastore-read-speeds.adoc @@ -9,6 +9,6 @@ [role="_abstract"] Datastore read rates impact the total transfer times, so it is essential to ensure fast reads are possible from the ESXi datastore to the ESXi host.
-Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible. +Example in numbers: 200 to 300 MiB per second was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible. diff --git a/documentation/modules/ref_fast-storage-network-speeds.adoc b/documentation/modules/ref_fast-storage-network-speeds.adoc index d5796e09eeb..d5133cbef9d 100644 --- a/documentation/modules/ref_fast-storage-network-speeds.adoc +++ b/documentation/modules/ref_fast-storage-network-speeds.adoc @@ -7,14 +7,14 @@ = Ensure fast storage and network speeds [role="_abstract"] -Ensure fast storage and network speeds, both for {vmw} and {ocp} (OCP) environments. +Ensure fast storage and network speeds, both for {vmw} and {ocp} environments. * To perform fast migrations, {vmw} must have fast read access to datastores. Networking between {vmw} ESXi hosts should be fast: ensure a 10 GbE network connection and avoid network bottlenecks. -** Extend the {vmw} network to the OCP Workers Interface network environment. +** Extend the {vmw} network to the Red Hat OpenShift Container Platform Workers Interface network environment. ** It is important to ensure that the {vmw} network offers high throughput (10 Gigabit Ethernet) and rapid networking to guarantee that the reception rates align with the read rate of the ESXi datastore. ** Be aware that the migration process uses significant network bandwidth and that the migration network is utilized. If other services use that network, it might have an impact on those services and their migration rates. -** For example, 200 to 325 MiB/s was the average network transfer rate from the `vmnic` for each ESXi host associated with transferring data to the OCP interface.
\ No newline at end of file +** For example, 200 to 325 MiB/s was the average network transfer rate from the `vmnic` for each ESXi host associated with transferring data to the Red Hat OpenShift Container Platform interface. \ No newline at end of file diff --git a/documentation/modules/ref_mtv-operator-parameters.adoc b/documentation/modules/ref_mtv-operator-parameters.adoc index aa4afcceaab..1ac0305a8fb 100644 --- a/documentation/modules/ref_mtv-operator-parameters.adoc +++ b/documentation/modules/ref_mtv-operator-parameters.adoc @@ -39,7 +39,7 @@ a|The maximum number of disks or VMs that can transfer or migrate simultaneously |`10` |`controller_filesystem_overhead` -|Percentage of space in persistent volumes allocated as file system overhead when the `storageclass` is `filesystem`. +|Percentage of space in persistent volumes allocated as file system resource usage when the `storageclass` is `filesystem`. *`ForkliftController` CR only.* |`10` diff --git a/documentation/modules/ref_source-vm-migration-considerations.adoc b/documentation/modules/ref_source-vm-migration-considerations.adoc index af98ad82960..099684507f9 100644 --- a/documentation/modules/ref_source-vm-migration-considerations.adoc +++ b/documentation/modules/ref_source-vm-migration-considerations.adoc @@ -11,10 +11,10 @@ Review these considerations when planning your migration of VMs from a source pr VM naming:: -* *DNS compliance in {virt}:* VM names must be DNS-compliant and unique in the {virt} environment. {project-first} automatically adjusts non-compliant VM names in the target cluster. Alternatively, you can rename target VMs in the {project-short} UI. For information about renaming VMs, see xref:proc_renaming-vms-for-migration_{context}[Renaming virtual machines]. +* *DNS compliance in {virt}:* VM names must be DNS-compliant and unique in the {virt} environment. {project-first} automatically adjusts noncompliant VM names in the target cluster. 
Alternatively, you can rename target VMs in the {project-short} UI. For information about renaming VMs, see xref:proc_renaming-vms-for-migration_{context}[Renaming virtual machines]. Windows-specific considerations:: -* *VSS requirement for Windows warm migrations:* For VMs running Microsoft Windows, the Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. When performing a warm migration of a Microsoft Windows VM from {vmw}, you must start VSS on the Windows guest OS for the snapshot and `Quiesce guest file system` to succeed. If you do not start VSS on the Windows guest OS, the snapshot creation during the Warm migration fails with the following error: +* *VSS requirement for Windows warm migrations:* For VMs running Microsoft Windows, the Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. When performing a warm migration of a Microsoft Windows VM from {vmw}, you must start VSS on the Windows guest operating system for the snapshot and `Quiesce guest file system` to succeed. If you do not start VSS on the Windows guest operating system, the snapshot creation during the warm migration fails with the following error: + ---- An error occurred while taking a snapshot: Failed to restart the virtual machine @@ -34,7 +34,7 @@ link:https://issues.redhat.com/browse/MTV-1548[(MTV-1548)] Operating system compatibility:: -* *Limited support for dual-boot OS VMs:* {project-short} has limited support for the migration of dual-boot OS VMs. In the case of a dual-boot OS VM, {project-short} attempts to convert the first boot disk it finds. Alternatively, you can specify the root device in the {project-short} UI. +* *Limited support for dual-boot operating system VMs:* {project-short} has limited support for the migration of dual-boot operating system VMs. In the case of a dual-boot operating system VM, {project-short} attempts to convert the first boot disk it finds.
Alternatively, you can specify the root device in the {project-short} UI. diff --git a/documentation/modules/ref_source-vm-prerequisites.adoc b/documentation/modules/ref_source-vm-prerequisites.adoc index 07a31758223..b3e21ed7224 100644 --- a/documentation/modules/ref_source-vm-prerequisites.adoc +++ b/documentation/modules/ref_source-vm-prerequisites.adoc @@ -13,11 +13,11 @@ Prerequisites:: * ISO images and CD-ROMs are unmounted. * Each NIC contains an IPv4 address, an IPv6 address, or both. -* The OS of each VM is certified and supported as a guest OS for conversions. +* The operating system of each VM is certified and supported as a guest operating system for conversions. + [NOTE] ==== -You can check that the OS is supported by referring to the table in link:https://access.redhat.com/articles/1351473[Converting virtual machines from other hypervisors to KVM with virt-v2v]. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts. +You can check that the operating system is supported by referring to the table in link:https://access.redhat.com/articles/1351473[Converting virtual machines from other hypervisors to KVM with virt-v2v]. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts. ==== diff --git a/documentation/modules/rn-2-10-5-resolved-issues.adoc b/documentation/modules/rn-2-10-5-resolved-issues.adoc index b60dddab871..5828ad6f331 100644 --- a/documentation/modules/rn-2-10-5-resolved-issues.adoc +++ b/documentation/modules/rn-2-10-5-resolved-issues.adoc @@ -9,9 +9,9 @@ [role="_abstract"] Review the resolved issues in this release of {project-short}. -A fatal error occurred when migrating win-virtio VMs:: +An unrecoverable error occurred when migrating win-virtio VMs:: -Before this update, migrations of `win-virtio` VMs failed due to a `MEMORY MANAGEMENT` fatal error. With this release, the error no longer occurs and migrations of `win-virtio` VMs work as expected. 
+Before this update, migrations of `win-virtio` VMs failed due to an unrecoverable `MEMORY MANAGEMENT` error. With this release, the error no longer occurs and migrations of `win-virtio` VMs work as expected. + link:https://issues.redhat.com/browse/MTV-4534[MTV-4534] diff --git a/documentation/modules/storage-support.adoc b/documentation/modules/storage-support.adoc index 17a44551c99..4bfe442738c 100644 --- a/documentation/modules/storage-support.adoc +++ b/documentation/modules/storage-support.adoc @@ -14,27 +14,27 @@ |=== |Provisioner |Volume mode |Access mode -|kubernetes.io/aws-ebs +|kubernetes.io/aws-ebs |Block |ReadWriteOnce -|kubernetes.io/azure-disk +|kubernetes.io/azure-disk |Block |ReadWriteOnce -|kubernetes.io/azure-file +|kubernetes.io/azure-file |Filesystem |ReadWriteMany -|kubernetes.io/cinder +|kubernetes.io/cinder |Block |ReadWriteOnce -|kubernetes.io/gce-pd +|kubernetes.io/gce-pd |Block |ReadWriteOnce -|kubernetes.io/hostpath-provisioner +|kubernetes.io/hostpath-provisioner |Filesystem |ReadWriteOnce @@ -50,11 +50,11 @@ |Block |ReadWriteOnce -|kubernetes.io/rbd +|kubernetes.io/rbd |Block |ReadWriteOnce -|kubernetes.io/vsphere-volume +|kubernetes.io/vsphere-volume |Block |ReadWriteOnce |=== @@ -82,11 +82,11 @@ If your migration uses block storage and persistent volumes created with an EXT4 ==== When you migrate from {osp}, or when you run a cold migration from {rhv-full} to the {ocp} cluster that {project-short} is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead. -If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In such a case, you would want to increase the file system overhead. +If the configured file system resource usage, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space.
In such a case, you would want to increase the file system overhead. In some cases, however, you might want to decrease the file system overhead to reduce storage consumption. -You can change the file system overhead by changing the value of the `controller_filesystem_overhead` in the `spec` portion of the `forklift-controller` CR, as described in xref:configuring-mtv-operator_{context}[Configuring the MTV Operator]. +You can change the file system resource usage by changing the value of the `controller_filesystem_overhead` in the `spec` portion of the `forklift-controller` CR, as described in xref:configuring-mtv-operator_{context}[Configuring the MTV Operator]. ==== diff --git a/documentation/modules/vddk-validator-containers.adoc b/documentation/modules/vddk-validator-containers.adoc index ac301f7e24b..f382c859b01 100644 --- a/documentation/modules/vddk-validator-containers.adoc +++ b/documentation/modules/vddk-validator-containers.adoc @@ -11,7 +11,7 @@ If you have the link:https://docs.openshift.com/container-platform/{ocp-version} You can see the defaults, which you can override in the `ForkliftController` custom resource (CR), listed as follows. If necessary, you can adjust these defaults. -These settings are highly dependent on your environment. If there are many migrations happening at once and the quotas are not set enough for the migrations, then the migrations can fail. This can also be correlated to the `MAX_VM_INFLIGHT` setting that determines how many VMs/disks are migrated at once. +These settings are highly dependent on your environment. If there are many migrations happening at once and the quotas are not set high enough for them, the migrations can fail. This is also affected by the `MAX_VM_INFLIGHT` setting, which determines how many VMs or disks are migrated at once. The following defaults can be overridden in the `ForkliftController` CR:
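The `ForkliftController` CR overrides discussed above can be sketched as a fragment like the following. This is an illustrative assumption, not a recommended configuration: the namespace and the values shown are placeholders, and the exact field names should be verified against the parameter table for your {project-short} version.

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_max_vm_inflight: 20       # maximum disks or VMs transferring at once (illustrative value)
  controller_filesystem_overhead: 10   # percent of each Filesystem-mode PV reserved as overhead (the default)
----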