How to Migrate VMware to Hyper-V (1): Environment Design


Welcome to the first part of my 12-part series, which will walk you through the steps to migrate VMware VMs to Hyper-V using SCVMM. I wrote this series to be suitable for a production or lab environment.

This series is also suitable for deploying a Hyper-V/VMM cluster.

In this first part, you will plan your environment.

The tasks in this first part of the hands-on deployment guide are for planning: apart from downloading and preparing the required installation files, every task walks you through planning your deployment. A Hyper-V cluster/SCVMM deployment is complex, so planning is essential to ensure success.

Task 1: Download Installation Files

Before downloading the required installation files, create the folders to save the files using this PowerShell script.

Run this script on a server where you intend to save the installation files. You need about 10 GB of free space on the drive where you create these folders for storing the installation files, which you will download shortly.

Get-Disk #lists all available disks in the server
Get-Partition -DiskNumber 1 #displays the partitions on disk 1
New-Item -Name Hyper-VFiles -ItemType Directory -Path E:\ #create a folder in drive E:

#Create subfolders in the Hyper-VFiles folder
#Change the Path parameter to the folder

"WinServer22", "SQLServer22", "SCVMMServer22", "Drivers", "SCVMM UR2", "WAC", "Veeam" | ForEach-Object {
New-Item -Name $_ -ItemType Directory -Path E:\Hyper-VFiles }

#Create the share

$Parameters = @{
Name = "Hyper-VFiles"
Path = "E:\Hyper-VFiles"
FullAccess = "lab\Administrator" }
New-SmbShare @Parameters
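
Optionally, before downloading anything, you can confirm that the folders, the share, and the free space are in place. A quick check along these lines should work (assuming the E:\Hyper-VFiles path and share name used above):

#Confirm the subfolders exist
Get-ChildItem -Path E:\Hyper-VFiles -Directory | Select-Object Name

#Confirm the SMB share and its permissions
Get-SmbShare -Name Hyper-VFiles
Get-SmbShareAccess -Name Hyper-VFiles

#Confirm there is at least 10 GB free on drive E: for the downloads
Get-Volume -DriveLetter E | Select-Object DriveLetter, @{n='FreeGB'; e={[math]::Round($_.SizeRemaining/1GB, 1)}}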

Download the following files and save them in the folders created with the above PowerShell script (an optional download script sketch follows the list):

  1. Windows Server 2022 ISO Image
    – this link downloads an evaluation copy of Windows
    – If you’re installing in a production environment, install a retail copy
    – I found that converting a Windows Server 2022 Standard Server Core installation to the full edition did not work as of August 2024, when I wrote this guide
    – The conversion works if you install the evaluation copy with the Desktop Experience (full GUI).
  2. SQL Server 2022 Exe 
    – this link downloads an evaluation copy of SQL Server 2022
  3. SCVMM prerequisite tools
    a) Windows ADK for Windows 10, version 1809 10.1.17763.1
    b) Windows PE add-on for ADK, version 1809 10.1.17763.1
    c) Microsoft ODBC Driver 17 for SQL Server
    d) SQL Server Management Studio
    e) Microsoft Visual C++ 2013 Redistributable (x86) – 12.0.30501
    f) Microsoft Visual C++ 2013 Redistributable (x64)
  4. SCVMM exe
    – this link downloads an evaluation copy of SCVMM 2022
  5. Veeam Backup and Replication
    a) Requires a business email to download
    b) Download the Veeam installation files
    c) Request a 30-day trial license
  6. SCVMM UR2
    a) SCVMM Update Rollup 2
    b) SCVMM Admin Console R2 Update
    c) SCVMM Guest Agent R2 Update
    d) Registry key script
    When you download the files, rename them as shown in a), b), and c)
  7. Windows Admin Center
    a) If you’re deploying Hyper-V on Windows Server Core, you require WAC (Windows Admin Center) to manage the servers
    b) Windows Admin Center is free
I included the Veeam Backup and Replication download because parts 10a and 10b of this guide explain how to install and configure Veeam and use Veeam Instant Recovery to migrate VMware VMs to Hyper-V VMs.
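
If you prefer to script the downloads rather than save each file through a browser, a minimal sketch like the one below should work. The URLs here are placeholders, not the real download links – substitute the links from the list above and extend the table for the remaining files.

#Map each target subfolder to its download URL (placeholder URLs – replace with the real links)
$Downloads = @{
"WinServer22" = "https://example.com/WindowsServer2022-Eval.iso"
"SQLServer22" = "https://example.com/SQL2022-SSEI-Eval.exe" }

foreach ($Folder in $Downloads.Keys) {
$Destination = Join-Path "E:\Hyper-VFiles\$Folder" (Split-Path $Downloads[$Folder] -Leaf)
Start-BitsTransfer -Source $Downloads[$Folder] -Destination $Destination }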

Task 2: Prepare Installation Files

The SCVMM and Microsoft SQL downloads from the Task 1 section are executables. I will show you how to create the installation files using the downloaded executables in the two subsections below.

Task 2.1: Download the SQL ISO Image

Ideally, you should be able to install SQL Server with the downloaded executable file. However, when I tried that, I got the error message, “SQL Server 2022 – oops, a required file could not be downloaded.”

The workaround is to download the ISO image of the SQL Server installation. Here are the steps:

  1. Double-click the executable file.
  2. Then, on the first page of the wizard, select Download Media.
  3. After that, the Specify SQL Server Installation Download screen loads. Select the language, set the package type to ISO, and then choose the path to save the ISO image. When you finish entering the details and selections, click the Download button.

You should see an ISO file when the download completes.
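
Once the download completes, you can mount the ISO and check its contents from PowerShell instead of double-clicking it. The file name below is an example – use the name of the ISO you downloaded:

#Mount the SQL Server ISO and list its contents (example path and file name)
$ISOPath = "E:\Hyper-VFiles\SQLServer22\SQLServer2022-x64-ENU.iso"
$Image = Mount-DiskImage -ImagePath $ISOPath -PassThru
$DriveLetter = ($Image | Get-Volume).DriveLetter
Get-ChildItem "$($DriveLetter):\"

#Dismount the image when you're done
Dismount-DiskImage -ImagePath $ISOPath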

Task 2.2: Extract the SCVMM Installation Files

  1. Double-click the SCVMM executable file you downloaded earlier.
  2. After that, click Next on the first page and accept the license agreement. Then, enter the path to store the SCVMM installation files. To make it easy to locate the installation files and keep them separate from other files, enter a folder name – I used SCVMM – after the main SCVMMServer22 folder.
  3. Finally, to extract the installation files, click the Extract button.

Task 3: Plan Servers and Network Interface Cards

For this project, you require at least two physical servers. The two servers will be used to create a Hyper-V cluster, which will eventually be managed by System Center Virtual Machine Manager (SCVMM).

In production, each server should have two network adapters connected to different VLANs—more on VLANs shortly. Moreover, each network adapter on a VLAN should be connected to separate physical switches to provide redundancy. 

Where possible, also plan for network card redundancy. By this, I mean that you should not team two ports on the same physical adapter – if that adapter fails, the server loses connection to that VLAN, and the service it provides becomes unavailable.

Having said all that, I’ll use an HP ENVY x360 Convertible laptop and an HP EliteDesk 800 G2 SFF in my home test lab as my Hyper-V hosts.  I also have a Dell Latitude E7470 running Windows Server 2022 Standard that provides iSCSI storage. 

If you need to buy these (or similar) devices for a home lab setup, Table 1.1 below lists their specs.

Whether you're deploying in a production environment or building a lab, use these tables to plan your hardware and define the hosts' network settings.

Make/Model Total RAM CPU Storage
HP ENVY x360 Convertible 32 GB Intel Core i7-7500U 2.70GHz 1×250 GB SSD, 1×1 TB HDD
HP EliteDesk 800 G2 SFF 32 GB Intel Core i5-6500 3.2 GHz 1×256 GB SSD
Dell Latitude E7470 16 GB Intel Core i7-6600U 2.60GHz 1x 2 TB SSD (internal), 2x 1 TB external USB disks
Table 1.1

The table below lists the hostname and network settings of the three hosts.

Computer Make/Model Host Name IP Address DNS Server
HP ENVY x360 Convertible IPMpHPV4 192.168.0.104 192.168.0.80
HP EliteDesk 800 G2 SFF IPMpHPV5 192.168.0.105 192.168.0.80
Dell Latitude E7470 IPMpiSCSI2 192.168.0.109 192.168.0.80
Table 1.2
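
To collect these details from each host, you can pull the hardware and network information with PowerShell rather than checking manually. A minimal sketch, run locally on each host:

#Make, model, RAM, and CPU for Table 1.1
Get-ComputerInfo -Property CsManufacturer, CsModel, CsTotalPhysicalMemory, CsProcessors

#Physical disks for the storage column
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, @{n='SizeGB'; e={[math]::Round($_.Size/1GB)}}

#Hostname, IPv4 addresses, and DNS servers for Table 1.2
$env:COMPUTERNAME
Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress
Get-DnsClientServerAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, ServerAddresses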

Task 4: Plan Server pNICs for Redundancy

If you’re deploying in a production environment, you should have multiple physical switches with VLANs for each type of workload. For example, there should be a different VLAN for Hyper-V host management, cluster traffic, and live migration. 

Additionally, if you use iSCSI, there should be a different VLAN for this traffic. 

Earlier, I said your production environment should have multiple physical switches (at least 2). Having more than one switch allows for redundancy and avoids a single point of failure.

Speaking of redundancy design, the physical network interface cards (pNICs) on the servers (Hyper-V hosts) should also be designed for redundancy. Most servers should be equipped with multiple pNICs – each offering multiple ports.

To improve throughput and provide further redundancy, the NICs for each Hyper-V cluster workload should be teamed in pairs. However, plan the NIC teaming so that a single network card does not become a single point of failure: team the ports on the first pNIC with ports on the second pNIC.
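
To check which ports belong to which physical card before you pair them, compare the PCI bus and slot details of the adapters – ports that share the same bus and slot values are ports on the same card. A quick check, assuming the adapters still have their default names:

#Adapters that share the same Bus and Slot values are ports on the same physical NIC
Get-NetAdapterHardwareInfo | Select-Object Name, Bus, Device, Function, Slot | Sort-Object Bus, Slot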

Task 5: Plan pNIC Teaming Configuration

Moving on to the configuration of the physical switch ports, Hyper-V teaming only supports port trunking. Below is a sample configuration from Switch Configuration Examples for Microsoft SDN – Physical port configuration.

Here is the text from the above link – “Each physical port must be configured to act as a switchport and have the mode set to trunk to allow multiple VLANs to be sent to the host. For RDMA Priority-flow-control must be on and the service-policy must point to the input queue that you will define below.”

interface Ethernet1/3/1
speed 10000
priority-flow-control mode on
switchport mode trunk
switchport trunk native vlan VLAN_ID
switchport trunk allowed vlan VLAN_ID_RANGE
spanning-tree port type edge trunk
service-policy type queuing input INPUT_QUEUING
no shutdown

Note that Hyper-V teaming does not support LACP. If your network guys configure the ports on the switch as LACP, the Hyper-V teamed switch will not work. 

Speaking of teamed switches, Hyper-V in Windows Server 2022 supports a virtual switch type called Switch Embedded Teaming (SET). This type of NIC teaming is configured and managed in Hyper-V.

Hyper-V SET operates in switch-independent mode. This means that Hyper-V handles the teaming, unlike LACP, where the physical switch handles the teaming and load balancing.
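
For reference, a SET virtual switch is created in PowerShell with New-VMSwitch. The sketch below teams one port from each physical card using the example adapter names planned later in Table 1.3 – treat it as an illustration for now, since the guide creates the virtual switches in a later part:

#Create a Switch Embedded Teaming (SET) vSwitch from one port on each physical NIC
New-VMSwitch -Name "Mgt-vSwitch" -NetAdapterName "Mgt-pNIC-1", "Mgt-pNIC-2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

#Confirm the teaming mode and load-balancing algorithm of the new switch
Get-VMSwitchTeam -Name "Mgt-vSwitch"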

After everything I have said about planning server pNIC configuration, physical switch connections, and VLANs for redundancy, I will use a simple configuration in my lab. My home lab does not have the redundant hardware described in this and the previous subsections.

So, the PCs I’m using as my Hyper-V hosts each have one pNIC. This pNIC will serve all traffic on a single network, 192.168.0.0/24. 

If you’re building a 2-node Hyper-V cluster in production, your hosts should have the following pNIC setup:

Host Name Default pNIC Names* New pNIC Names Hyper-V SET Name**
IPMpHPV4 Network 1, Network 2 Mgt-pNIC-1, Mgt-pNIC-2 Mgt-vSwitch
Network 3, Network 4 Clu-pNIC-1, Clu-pNIC-2 Clu-vSwitch
Network 5, Network 6 Lmg-pNIC-1, Lmg-pNIC-2 Lmg-vSwitch
Network 7*** Str-vSwitch Str-vSwitch
IPMpHPV5 Network 1, Network 2 Mgt-pNIC-1, Mgt-pNIC-2 Mgt-vSwitch
Network 3, Network 4 Clu-pNIC-1, Clu-pNIC-2 Clu-vSwitch
Network 5, Network 6 Lmg-pNIC-1, Lmg-pNIC-2 Lmg-vSwitch
Network 7 Str-vSwitch Str-vSwitch
Table 1.3

*I’m using Network 1, Network 2, etc., as the default names of the physical network adapters. In reality, the names will be different.
**See the next section for the virtual switch planning details.
***This is for the iSCSI traffic. I’m assuming that the NIC will not be teamed.

To plan your deployment, use the real names of the pNICs on your Hyper-V servers. You do not need to rename the NICs at this stage, as that will be done later in the guide.
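
For reference, when you get to that part of the guide, renaming an adapter is a one-liner like the one below. The names are examples from Table 1.3; the default name on your host will differ:

#Rename a physical adapter to match the planned naming convention (example names)
Rename-NetAdapter -Name "Ethernet" -NewName "Mgt-pNIC-1"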

For my Hyper-V lab, I use the simplified configuration below:

Host Name Default pNIC Names* New pNIC Names Hyper-V SET Name**
IPMpHPV4 Network Mgt-pNIC Mgt-vSwitch
IPMpHPV5 Network Mgt-pNIC Mgt-vSwitch
Table 1.4

Task 6: Plan Host Networking Configuration

Use the table below to plan the network configuration of the Hyper-V hosts.

Host Name IP Address Subnet Mask Default Gateway* Preferred DNS
IPMpHPV4 192.168.0.101 255.255.255.0 192.168.0.1 192.168.0.80
  192.168.1.11 255.255.255.0 Not Applicable Not Applicable
  192.168.2.11 255.255.255.0 Not Applicable Not Applicable
  192.168.3.12 255.255.255.0 Not Applicable Not Applicable
IPMpHPV5 192.168.0.102 255.255.255.0 192.168.0.1 192.168.0.80
  192.168.1.12 255.255.255.0 Not Applicable Not Applicable
  192.168.2.12 255.255.255.0 Not Applicable Not Applicable
  192.168.3.13 255.255.255.0 Not Applicable Not Applicable
Table 1.5
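
When you later apply this plan, configuring a row of Table 1.5 looks roughly like the sketch below, which uses the IPMpHPV4 values. The interface aliases are examples and should match your renamed adapters or virtual switches:

#Management network: IP, gateway, and DNS from Table 1.5 (example values for IPMpHPV4)
New-NetIPAddress -InterfaceAlias "Mgt-vSwitch" -IPAddress 192.168.0.101 -PrefixLength 24 -DefaultGateway 192.168.0.1
Set-DnsClientServerAddress -InterfaceAlias "Mgt-vSwitch" -ServerAddresses 192.168.0.80

#Cluster, live migration, and storage networks have no gateway or DNS, so only the IP is set
New-NetIPAddress -InterfaceAlias "Clu-vSwitch" -IPAddress 192.168.1.11 -PrefixLength 24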

Use the table below to plan the networking configuration for the VMM VM, the cluster resource name, and the cluster file server.

Resource Name Resource Purpose VM IP address Subnet Mask Default Gateway* Preferred DNS
IPMvVMM VMM VM 192.168.0.106 255.255.255.0 192.168.0.1 192.168.0.80
lab-cluster-2 Cluster resource name 192.168.0.107 255.255.255.0 192.168.0.1 192.168.0.80
lab-vmm-lib Cluster file server name 192.168.0.108 255.255.255.0 192.168.0.1 192.168.0.80
Table 1.6

Task 7: Plan Hyper-V Virtual Switches and Storage

Having completed the server pNIC, the physical switch configuration, and VLAN planning, it is time to plan for the Hyper-V virtual switches. 

As I already hinted, if you're setting this up in a production environment, it is recommended to have teamed pNICs designated for different workloads. These teamed pNICs will connect to Hyper-V Switch Embedded Teaming (SET) virtual switches.

The tables below show a sample configuration for a typical production environment (Table 1.7) and the configuration for a home lab setup (Table 1.8).

Virtual Switch Name Type Purpose Connected to pNIC
Mgt-vSwitch External Management/VM traffic Teamed pNICs connected to the management traffic VLAN
Clu-vSwitch External Cluster traffic Teamed pNICs connected to the cluster traffic VLAN
Lmg-vSwitch External Live Migration Traffic Teamed pNICs connected to the Live Migration VLAN
Str-vSwitch External iSCSI Traffic Teamed pNICs connected to the iSCSI traffic VLAN
Table 1.7
Virtual Switch Name Type Purpose Connected to pNIC
Mgt-vSwitch External All traffic A single pNIC connected to my home hub
Table 1.8

If you do not have separate VLANs for cluster and live migration traffic, you may plan a single vSwitch for the two workloads.

The storage sizing table below (Table 1.9) reflects my lab and is for guidance only. In production, use larger capacities for the VMM Shared Library and the Cluster Shared Volume.

However, for the Cluster Quorum witness disk, 1 GB is enough.

iSCSI Virtual disk name Size Purpose
QuorumvDisk 1 GB Cluster Quorum witness disk
VMMLibvDisk 200 GB VMM Shared Library
CSVvDisk 720 GB Cluster Shared Volume
Table 1.9
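
For reference, on the iSCSI target server (the Dell Latitude in my lab), the disks in Table 1.9 would be created roughly as shown below. This assumes the iSCSI Target Server role is installed and E: is the data volume – the paths, target name, and initiator IDs are examples:

#Create the three iSCSI virtual disks from Table 1.9 (example paths)
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\QuorumvDisk.vhdx" -SizeBytes 1GB
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\VMMLibvDisk.vhdx" -SizeBytes 200GB
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\CSVvDisk.vhdx" -SizeBytes 720GB

#Create a target for the two Hyper-V hosts and map the disks to it (example target name and initiator IPs)
New-IscsiServerTarget -TargetName "HyperVCluster" -InitiatorIds "IPAddress:192.168.0.104", "IPAddress:192.168.0.105"
"QuorumvDisk", "VMMLibvDisk", "CSVvDisk" | ForEach-Object {
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "E:\iSCSIVirtualDisks\$_.vhdx" }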

Deploying a Hyper-V/SCVMM cluster is a complex process involving multiple tasks, and some tasks must be completed before others can begin. I have arranged the tasks in this guide in an order that ensures a seamless deployment. If you skip a step – including the planning steps in this part – it is almost certain that you will run into problems later.

Congratulations! You have completed the planning part of this hands-on guide. Proceed to part 2 – Prep Hyper-V Hosts.


Victor Ashiedu
