Once you have added your hosts to vSphere, it's time to connect all the hosts to a centralized networking configuration so we can move guests between hosts for maintenance and DRS. These are the basics; for more detailed information, see the vSphere vMotion Networking Requirements documentation from VMware.
Step 1: Create a Distributed Switch for vMotion Traffic
- Navigate to Networking
- Click the Actions Dropdown
- Select Distributed Switch
- Choose New Distributed Switch
Step 2: Configure the Distributed Switch
Name the Distributed Switch something that will let you know what it is and what it does. ["ds" for distributed switch and "vMotion" for the service]
Click NEXT
Select the version that will support the oldest version of ESXi in the environment.
Click NEXT
- Choose the number of Physical Uplinks [1 for vMotion is plenty]
- Enable Network I/O Control [this lets you set shares and limits so vMotion and other system traffic types are prioritized when uplinks are congested]
- Select Create a default port group
- Name the Port Group [pg-XXXXX]
Click NEXT
Review your settings
Click FINISH
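If you are not sure which ESXi releases are running in the environment (which determines the switch version you can safely pick above), each host's Summary page in the vSphere Client shows it. As a sketch, assuming shell or SSH access on a host (SSH is enabled later, in Step 8), you can also check from the ESXi shell:

# Report the ESXi product, version, and build running on this host
esxcli system version get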
Step 3: Edit the Properties of the Distributed Switch
- Right-Click the new DS
- Select Settings
- Click Edit Settings
From the Distributed Switch – Edit Settings window:
- Select the Advanced tab
- Increase the MTU as high as the physical network will permit. [My USB NIC devices are capped at 4000]
- Click OK
[ Read more about Jumbo Frames ]
Step 4: Edit the Properties of the Port Group
- Right-Click the new PG
- Click Edit Settings
Under General, change the Port Binding to Ephemeral – no binding [This will help later when we install NSX-T]
Click OK
Step 5: Connect the Hosts to the Distributed Switch
From Networking, Right-Click on the vMotion DS and click Add and Manage Hosts…
Select the Add hosts radio button and click NEXT
Click New hosts…
Select the hosts to be added to this DS [generally all the hosts in that cluster]
Click OK
On the next screen click NEXT
Step 6: Assign Uplinks
On each host, select the available Physical Adapter you want to use for the Uplink(s) for this DS [a host-side way to identify vmnics is shown after this step]
Click the Assign uplink button
Select the available Uplink you want to assign to this Physical Adapter
If all your hosts have the same configuration, you can click Apply this uplink assignment to the rest of the hosts and the remaining hosts connected to this DS will be configured the same way automatically.
If you selected Apply this uplink assignment to the rest of the hosts, you will see that all hosts have been configured to connect the same adapter to the DS using the same uplink.
Verify your configuration and click NEXT
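If you are unsure which vmnic maps to which physical port when assigning uplinks, a quick host-side listing can help. This is a sketch assuming shell or SSH access on the host (SSH is enabled in Step 8):

# List the physical NICs with their link state, speed, MAC address, and hardware description
esxcli network nic list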
Click NEXT on the Manage VMkernel adapters screen [we will configure those later]
Click NEXT on the Migrate VM Networking screen [there should be no networking to migrate for vMotion]
Click FINISH
Verify all the Hosts are connected to the DS by selecting the DS and clicking the Hosts tab.
All desired hosts should be present.
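You can also confirm the connection from the host side. A minimal check, assuming shell or SSH access (SSH is enabled in Step 8); the output should show the new DS by name along with the uplink you assigned and the MTU configured in Step 3:

# Show every distributed switch this host participates in, including its uplinks and MTU
esxcli network vswitch dvs vmware list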
Step 7: Add VMkernel Adapters
From Networking, Right-Click on the vMotion PG and click Add VMkernel Adapters…
On the Select member Hosts window, click Attached hosts… and select all the hosts attached to the PG.
Click OK
Verify all the hosts and click NEXT
On the Configure VMkernel adapter window, select vMotion from the TCP/IP stack drop down.
Click NEXT
NOTE: vMotion should use non-routable IP space for isolation.
Enter the IP information for the vMotion interface on each host [this guide uses 172.16.0.101–103, as seen in the vmkping output in Step 8]. There is an autofill feature to make this process faster in large clusters.
Set the Gateway Configuration Type to Do not configure
Click NEXT
Verify the information is correct
Click FINISH
Select the vMotion PG and click on the Ports tab.
You should see each host connected to the PG and the status should be UP.
From Hosts and Clusters, the Networking / VMkernel adapters for each host should show the same vmk{#} with a corresponding IP and vMotion enabled.
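The same details are visible from the ESXi shell on each host. A minimal sketch, assuming the new interface is vmk1 as in the vmkping output below (yours may have a different vmk number):

# List all VMkernel interfaces with their switch, MTU, and TCP/IP stack
esxcli network ip interface list

# Show the IPv4 address assigned to the vMotion VMkernel interface
esxcli network ip interface ipv4 get -i vmk1

# Confirm the dedicated vMotion TCP/IP stack exists on the host
esxcli network ip netstack list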
Step 8: Test VMK Connectivity Between Hosts
- Enable SSH on the hosts
- SSH to each host and test VMkernel connectivity using the vmkping command [more about vmkping]:
vmkping -I vmk{#} -d -s {MTU-28} -S vmotion {vmk IP address of other host}
The -s value is the MTU minus 28 bytes (a 20-byte IP header plus an 8-byte ICMP header), so a 4000-byte MTU gives 3972. The -d flag sets the don't-fragment bit, so the ping fails if any device in the path cannot carry the full frame size.
VMKPING RESULTS FOR HOST 1 TO HOSTS 2 & 3
[root@ESX-01:~] vmkping -I vmk1 -d -s 3972 -S vmotion 172.16.0.102
PING 172.16.0.102 (172.16.0.102): 3972 data bytes
3980 bytes from 172.16.0.102: icmp_seq=0 ttl=64 time=0.737 ms
3980 bytes from 172.16.0.102: icmp_seq=1 ttl=64 time=0.649 ms
3980 bytes from 172.16.0.102: icmp_seq=2 ttl=64 time=0.670 ms
--- 172.16.0.102 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.649/0.685/0.737 ms
[root@ESX-01:~] vmkping -I vmk1 -d -s 3972 -S vmotion 172.16.0.103
PING 172.16.0.103 (172.16.0.103): 3972 data bytes
3980 bytes from 172.16.0.103: icmp_seq=0 ttl=64 time=1.184 ms
3980 bytes from 172.16.0.103: icmp_seq=1 ttl=64 time=0.872 ms
3980 bytes from 172.16.0.103: icmp_seq=2 ttl=64 time=0.699 ms
--- 172.16.0.103 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.699/0.918/1.184 ms
VMKPING RESULTS FOR HOST 2 TO HOSTS 1 & 3
[root@ESX-02:~] vmkping -I vmk1 -d -s 3972 -S vmotion 172.16.0.101
PING 172.16.0.101 (172.16.0.101): 3972 data bytes
3980 bytes from 172.16.0.101: icmp_seq=0 ttl=64 time=0.654 ms
3980 bytes from 172.16.0.101: icmp_seq=1 ttl=64 time=0.614 ms
3980 bytes from 172.16.0.101: icmp_seq=2 ttl=64 time=0.666 ms
--- 172.16.0.101 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.614/0.645/0.666 ms
[root@ESX-02:~] vmkping -I vmk1 -d -s 3972 -S vmotion 172.16.0.103
PING 172.16.0.103 (172.16.0.103): 3972 data bytes
3980 bytes from 172.16.0.103: icmp_seq=0 ttl=64 time=1.052 ms
3980 bytes from 172.16.0.103: icmp_seq=1 ttl=64 time=0.810 ms
3980 bytes from 172.16.0.103: icmp_seq=2 ttl=64 time=0.699 ms
--- 172.16.0.103 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.699/0.854/1.052 ms
- Continue this process for each of the hosts.
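To speed up the full mesh test, a small loop in the ESXi shell can hit every peer in one pass. This is a sketch assuming the lab values above (vmk1, 4000-byte MTU, 172.16.0.x addressing); adjust the address list to the other hosts' vMotion IPs on each host you run it from:

# Ping every other host's vMotion VMkernel address with full-size,
# don't-fragment packets over the vmotion TCP/IP stack
for ip in 172.16.0.102 172.16.0.103; do
  echo "--- $ip ---"
  vmkping -I vmk1 -d -s 3972 -S vmotion $ip
done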