HyperV|W2k12R2|4x1GB|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 4x 10 GbE NICs, either as add-in cards or LOM (LAN on Motherboard). Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The design combines physical networks with a software-defined / converged network for Hyper-V.
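
If iSCSI with MPIO is chosen, the storage connection could be brought up roughly as follows. This is a minimal sketch; the target portal address is a placeholder on the storage network (10.11.40.0/24):

# Install MPIO and let it claim iSCSI devices (run once per cluster node)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Start the iSCSI initiator service and connect to a placeholder portal
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "10.11.40.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true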


 Pros and Cons of this solution

Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Separate management and heartbeat interfaces
– Fully fault redundant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode; see the teaming sketch below

Con
– Limited bandwidth for live migration
– Many NICs and switches needed
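
As referenced in the list above, a minimal teaming sketch for the NIC pairs behind the converged switches. The adapter and team names are assumptions; use -TeamingMode Lacp instead when the physical switches are stacked:

# Switch-independent team over two 1 GbE ports (adapter names are placeholders)
New-NetLbfoTeam -Name "Team1GbE" -TeamMembers "NIC01","NIC02" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Same pattern for the two 10 GbE ports that will carry VM traffic
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC05","NIC06" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic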

 Switches

Switch name   Bandwidth   Switch type
1GBE SW01     1 GBit/s    physical, stacked or independent
1GBE SW02     1 GBit/s    physical, stacked or independent
10GBE SW01    10 GBit/s   physical, stacked or independent
10GBE SW02    10 GBit/s   physical, stacked or independent
SoftSW01      1 GBit/s    software defined / converged
SoftSW02      10 GBit/s   software defined / converged
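
A sketch of how the two converged switches could be created on top of the LBFO teams; the team names follow the teaming example above and are assumptions:

# SoftSW01 on the 1 GbE team; weight-based QoS, host vNICs are added explicitly
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team1GbE" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# SoftSW02 on the 10 GbE team for VM traffic
New-VMSwitch -Name "SoftSW02" -NetAdapterName "Team10GbE" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false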

 Necessary Networks

Network name       VLAN      IP network (IPv4)   Connected to switch
Management         100       10.11.100.0/24      SoftSW01
Cluster            101       10.11.101.0/24      SoftSW01
Live Migration     450       10.11.45.0/24       1GBE SW01 & 1GBE SW02
Storage            400       10.11.40.0/24       10GBE SW01 & 10GBE SW02
Virtual Machines   200 – x   10.11.x.x/x         SoftSW02
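
The Management and Cluster networks live as host vNICs on SoftSW01. A minimal sketch of creating and VLAN-tagging them, using the names and VLAN IDs from the table:

# Host vNIC for Management, tagged with VLAN 100
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100

# Host vNIC for Cluster, tagged with VLAN 101
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SoftSW01"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101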

 Possible Rear View of the Server

[Figure: RearSvr01 – possible rear view of the server]

 Schematic representation

[Figure: schematic representation of the network configuration – NIC01 / NIC02]

Switch Port Configuration

[Figure: switch port configuration – sw1gbe / sw10gbe]

Bandwidth Configuration vNICs

vNIC         min. Bandwidth Weight   PowerShell Command
Management   40%                     Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 40
Cluster      10%                     Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10

QoS Configuration Switch

Network name     Priority
Management       medium
Cluster          high
Storage          high
Live Migration   medium
VMs              depending on VM workload
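
The exact QoS commands on the physical switches are vendor-specific. On the host side, traffic could be tagged with 802.1p priorities via NetQos policies, for example (the priority values are assumptions and must match the switch configuration):

# Tag iSCSI traffic with 802.1p priority 4 (placeholder value)
New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4

# Tag live migration traffic with 802.1p priority 5 (placeholder value)
New-NetQosPolicy -Name "LiveMigration" -LiveMigration -PriorityValue8021Action 5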