Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 4x 10 GbE NICs and the LOM (LAN on Motherboard). Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network setup with a software-defined / converged network for Hyper-V.
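Where the iSCSI-with-MPIO option is chosen, enabling multipathing on each cluster node could look like the following minimal sketch (the target portal address and IQN are placeholders, not values from this design):

# Install MPIO and claim iSCSI devices for multipathing (a reboot may be required after the feature install)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Connect the iSCSI target; repeat per storage path for MPIO
New-IscsiTargetPortal -TargetPortalAddress "10.0.40.10"
Connect-IscsiTarget -NodeAddress "iqn.2000-01.example:target01" -IsMultipathEnabled $true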

Pros and cons of this solution

Pro
– High bandwidth for VMs
– Good bandwidth for storage
– The software-defined network can be fully controlled from the hypervisor
– Fully fault-redundant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con
– Becomes limited with a large number of VMs
– Network admins won't like it
– The combination of hardware-defined and software-defined networking is sometimes hard to understand


Switch Speed Type
10GbE SW01 10 GBit/s physical, stacked or independent
10GbE SW02 10 GBit/s physical, stacked or independent
SoftSW01 10 GBit/s software-defined / converged
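
As an illustration, the team and the converged switch SoftSW01 from the list above could be created roughly as follows (NIC and team names are placeholders; switch-independent teaming is shown, LACP would require the stacked-switch variant):

# Team the four physical 10 GbE NICs (adapter and team names are examples)
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Create the converged Hyper-V switch on top of the team with weight-based minimum bandwidth
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team10GbE" -MinimumBandwidthMode Weight -AllowManagementOS $false

Using -MinimumBandwidthMode Weight is what allows the per-vNIC bandwidth weights shown further below to be applied.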

Necessary Networks

Network name VLAN IP Network (IPv4) Connected to Switch
Management 100 SoftSW01
Cluster 101  SoftSW01
Livemigration 450 SoftSW01
Storage 400 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW01
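
The host vNICs for these networks could then be created on SoftSW01 and tagged with the VLANs from the table, for example as in the sketch below (VM traffic is tagged on the individual VM network adapters instead of on a host vNIC):

# One host vNIC per infrastructure network, tagged with its VLAN
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Management"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Cluster"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Livemigration"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Livemigration" -Access -VlanId 450
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Storage"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Storage" -Access -VlanId 400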

Possible rear view of the server

 Schematic representation

Switch Port Configuration

Bandwidth Configuration of the vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%
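
The matching commands for the "PowerShell Command" column could look like this sketch (the iSCSI row is applied to the Storage vNIC created above; adjust names to your environment):

Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "Storage" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 30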

QoS Configuration on the Physical Switches

Network name Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on VM workload

