How to pass DHCP / PXE through to a virtual machine

Hi everybody,

this is a post I have been meaning to write for a few months. It’s related to an issue, or rather a misunderstanding, that a customer of mine had.

He wanted to get a PXE boot, triggered by DHCP, into a virtual machine hosted on Hyper-V. For those of us who are familiar with virtualization, that sounds very simple, because the solution was that he hadn’t tagged all VLANs on the physical switch and on the virtual machine.

For those who are not that familiar with it, here is a short list of what you need to do to get that traffic through your physical and virtual switches right to your virtual machines.

 

Physical Switch Configuration

The first thing you need to do is to tag all VLANs your virtual machines will need access to on the physical switch ports your Hyper-V host, and therefore its virtual switch, is connected to.

An example: you have one virtual machine in VLAN 10 and one in VLAN 233. Both need to connect to your physical network. Your Hyper-V virtual switch is connected to Switch 1 on Port 12 and to Switch 2 on Port 14. That means you need to tag VLAN 10 and VLAN 233 on Switch 1 Port 12 and on Switch 2 Port 14.

Virtual Switch Configuration

Now you need to configure the virtual switch, and that’s the part most people overlook when working with virtualization. In nearly all hypervisors there is a software-based Layer 2 switch running, and that switch needs to be configured too. This is mostly done via the virtual machine settings.

In our example we need to set the VLAN tag for the virtual machine on the Hyper-V virtual switch. To do so, you change the settings of the virtual machine’s network adapter.
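If you prefer PowerShell over Hyper-V Manager, the same setting can be applied with the Hyper-V module. A minimal sketch, assuming a VM named "VM01" that should sit in VLAN 10 from the example above:

```powershell
# Put the VM's network adapter into access mode for VLAN 10.
# "VM01" is a placeholder; -VMNetworkAdapterName is only needed if the VM has more than one adapter.
Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "Network Adapter" -Access -VlanId 10

# Check the result.
Get-VMNetworkAdapterVlan -VMName "VM01"
```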


 

You can also configure the virtual switch port for VLAN trunking. My buddy Charbel wrote a great blog post about how to configure the virtual switch that way: What is VLAN Trunk Mode in Hyper-V?
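For reference, such a trunk could be configured in PowerShell roughly like this; the VM name, the VLAN list and the native VLAN are just assumptions based on the example above, see Charbel’s post for the details:

```powershell
# Trunk VLANs 10 and 233 to the VM; untagged frames end up in the native VLAN (here 10).
Set-VMNetworkAdapterVlan -VMName "VM01" -Trunk -AllowedVlanIdList "10,233" -NativeVlanId 10
```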

In our example you need to know one more thing: in Generation 1 Hyper-V VMs, only the legacy network adapter is able to perform a PXE boot.
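If the VM was created with only the standard (synthetic) network adapter, a legacy adapter can be added while the VM is turned off, for example like this (VM, switch and adapter names are placeholders):

```powershell
# Add a legacy network adapter to a Generation 1 VM so it can perform a PXE boot.
# The VM has to be powered off when the adapter is added.
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "vSwitch01" -Name "Legacy PXE" -IsLegacy $true
```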

Hyper-V|W2k12R2|8x1GB|2x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 8x 1 GbE NICs / LOM (LAN on Motherboard) ports and 2x 10 GbE NICs. Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network configuration with a software-defined / converged network for Hyper-V.
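To give a rough idea of how the converged part could be built, here is a minimal sketch for SoftSW01 on top of a team of the two 10 GbE ports. The team and adapter names are placeholders, and depending on your switches you may prefer LACP instead of switch independent teaming:

```powershell
# Team the two 10 GbE ports (adapter names are placeholders for this host).
New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC09","NIC10" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Converged virtual switch on top of the team, with weight-based QoS and
# without the automatically created host vNIC.
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team10G" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false
```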


Pros and cons of that solution

Pro:
– Good bandwidth for VMs
– Good bandwidth for storage
– Separate NICs for Livemigration, Cluster and Management
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– The network becomes limited with a large number of VMs
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name | Bandwidth | Switch type
1GBE SW01   | 1 GBit/s  | physical, stacked or independent
1GBE SW02   | 1 GBit/s  | physical, stacked or independent
10GBE SW01  | 10 GBit/s | physical, stacked or independent
10GBE SW02  | 10 GBit/s | physical, stacked or independent
SoftSW01    | 10 GBit/s | software-defined / converged
SoftSW02    | 10 GBit/s | software-defined / converged

Necessary Networks

Network name         | VLAN    | IP Network (IPv4)            | Connected to Switch
Management           | 10      | 10.11.10.0/24                | SoftSW01
Cluster              | 11      | 10.11.11.0/24                | SoftSW01
Livemigration        | 45      | 10.11.45.0/24                | 1GBE SW01 / 1GBE SW02
Storage (with iSCSI) | 40      | 10.11.40.0/24                | 10GBE SW01 / 10GBE SW02
Storage (with SMB)   | 50, 51  | 10.11.50.0/24, 10.11.51.0/24 | 10GBE SW01, 10GBE SW02
Virtual Machines     | 200 – x | 10.11.x.x/x                  | 1GBE SW01 / 1GBE SW02
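The Management and Cluster vNICs on SoftSW01 could then be created and tagged like this; the vNIC names are assumptions, the VLAN IDs come from the table above:

```powershell
# Host (management OS) vNICs on the converged switch.
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Cluster"

# Tag them with the VLANs from the table.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 11
```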

Possible rear view of the server


 Schematic representation


Switch Port Configuration

   

Bandwidth Configuration vNICs

vNIC       | min. Bandwidth Weight | PowerShell Command
Management | 10%                   |
Cluster    | 5%                    |
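A sketch of how these weights could be set, assuming the vNICs exist on SoftSW01 and the switch was created with -MinimumBandwidthMode Weight:

```powershell
# Minimum bandwidth weights for the host vNICs, matching the table above.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
```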

QoS Configuration Switch

Network name  | Priority
Management    | medium
Cluster       | high
Storage       | high
Livemigration | medium
VMs           | depending on VM workload

 

Hyper-V|W2k12R2|3x1GB

Don’t use this configuration in production! It’s for lab use only!

This configuration is very similar to the 4x 10 GbE config, but it is not usable for production.

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 3x 1 GbE NICs / LOM (LAN on Motherboard) ports. Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network configuration with a software-defined / converged network for Hyper-V.


Pros and cons of that solution

Pro:
– Easy to build and good for lab use
– Enough bandwidth for test workloads
– Fully converged
– Cheap to build

Con:
– Limited bandwidth
– Not for production use

 Switches

Switch name | Bandwidth | Switch type
1GBE SW01   | 1 GBit/s  | physical, stacked or independent
1GBE SW02   | 1 GBit/s  | physical, stacked or independent
SoftSW01    | 1 GBit/s  | software-defined / converged

Necessary Networks

Network name     | VLAN    | IP Network (IPv4) | Connected to Switch
Management       | 100     | 10.11.100.0/24    | SoftSW01
Cluster          | 101     | 10.11.101.0/24    | SoftSW01
Livemigration    | 450     | 10.11.45.0/24     | SoftSW01
Storage          | 400     | 10.11.40.0/24     | SoftSW01
Virtual Machines | 200 – x | 10.11.x.x/x       | SoftSW01

Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC          | min. Bandwidth Weight | PowerShell Command
Management    | 10%                   |
Cluster       | 5%                    |
iSCSI         | 30%                   |
Livemigration | 30%                   |
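Assuming the same kind of converged switch and vNIC naming as in the previous example, the weights could be set like this; the value for the default flow (the VM traffic) is my assumption, following the common convention of letting all weights add up to 100:

```powershell
# Minimum bandwidth weights for the host vNICs, matching the table above.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 30

# Reserve the remaining weight (here 25) for the default flow, i.e. the VM traffic.
Set-VMSwitch -Name "SoftSW01" -DefaultFlowMinimumBandwidthWeight 25
```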

QoS Configuration Switch

Network name  | Priority
Management    | medium
Cluster       | high
Storage       | high
Livemigration | medium
VMs           | depending on VM workload

 

Hyper-V|W2k12R2|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 10 GbE NICs / LOM (LAN on Motherboard) ports. Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network configuration with a software-defined / converged network for Hyper-V.


Pros and cons of that solution

Pro:
– High bandwidth for VMs
– Good bandwidth for storage
– The software-defined network can be fully controlled from the hypervisor
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– Becomes limited with a large number of VMs
– Network admins won’t like it
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name | Bandwidth | Switch type
10GBE SW01  | 10 GBit/s | physical, stacked or independent
10GBE SW02  | 10 GBit/s | physical, stacked or independent
SoftSW01    | 10 GBit/s | software-defined / converged

Necessary Networks

Network name     | VLAN    | IP Network (IPv4) | Connected to Switch
Management       | 100     | 10.11.100.0/24    | SoftSW01
Cluster          | 101     | 10.11.101.0/24    | SoftSW01
Livemigration    | 450     | 10.11.45.0/24     | SoftSW01
Storage          | 400     | 10.11.40.0/24     | SoftSW01
Virtual Machines | 200 – x | 10.11.x.x/x       | SoftSW01

Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC          | min. Bandwidth Weight | PowerShell Command
Management    | 10%                   |
Cluster       | 5%                    |
iSCSI         | 30%                   |
Livemigration | 30%                   |

QoS Configuration Switch

Network name  | Priority
Management    | medium
Cluster       | high
Storage       | high
Livemigration | medium
VMs           | depending on VM workload

 

Hyper-V|W2k12R2|4x1GB|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 4x 10 GbE NICs / LOM (LAN on Motherboard) ports. Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network configuration with software-defined / converged networks for Hyper-V.


Pros and cons of that solution

Pro:
– High bandwidth for VMs
– Good bandwidth for storage
– Separate Management and heartbeat interfaces
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode

Con:
– Limited bandwidth for Livemigration
– A lot of NICs and switches are needed

 Switches

Switch name | Bandwidth | Switch type
1GBE SW01   | 1 GBit/s  | physical, stacked or independent
1GBE SW02   | 1 GBit/s  | physical, stacked or independent
10GBE SW01  | 10 GBit/s | physical, stacked or independent
10GBE SW02  | 10 GBit/s | physical, stacked or independent
SoftSW01    | 1 GBit/s  | software-defined / converged
SoftSW02    | 10 GBit/s | software-defined / converged

Necessary Networks

Network name     | VLAN    | IP Network (IPv4) | Connected to Switch
Management       | 100     | 10.11.100.0/24    | SoftSW01
Cluster          | 101     | 10.11.101.0/24    | SoftSW01
Livemigration    | 450     | 10.11.45.0/24     | 1GBE SW01 & 1GBE SW02
Storage          | 400     | 10.11.40.0/24     | 10GBE SW01 & 10GBE SW02
Virtual Machines | 200 – x | 10.11.x.x/x       | SoftSW02

Possible rear view of the server



 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC       | min. Bandwidth Weight | PowerShell Command
Management | 40%                   |
Cluster    | 10%                   |

QoS Configuration Switch

Network name  | Priority
Management    | medium
Cluster       | high
Storage       | high
Livemigration | medium
VMs           | depending on VM workload