How to configure cluster traffic priority on Windows Server

While writing my current cluster network series, I noticed a few points that people often miss when configuring a Microsoft cluster via Failover Cluster Manager.

One of them is that they neither prioritize the cluster networks against each other nor adjust the routing interface.

The following task must be done on every cluster node. We change the connection settings so that routed traffic prefers the management interface; you can do this either in the GUI or with the PowerShell sketch below.

1. Navigate to your network adapter properties and open the Advanced Settings entry in the menu bar.

2. In the Advanced Settings dialog, move your management interface (the one that holds your default gateway) to the top of the binding order.

(Screenshot: adapter binding order in Advanced Settings)
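If you prefer scripting over the GUI, roughly the same effect can be achieved by lowering the IP interface metric of the management adapter. This is only a sketch; the interface aliases "Management" and "Cluster" and the metric values are examples that you need to adapt to your own adapters.

# Sketch: prefer the management interface for routed traffic (lowest metric wins)
Set-NetIPInterface -InterfaceAlias "Management" -AddressFamily IPv4 -InterfaceMetric 10
Set-NetIPInterface -InterfaceAlias "Cluster" -AddressFamily IPv4 -InterfaceMetric 20

# Verify the resulting order
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object InterfaceMetric | Format-Table InterfaceAlias, InterfaceMetric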

 

So that's all for the routing part. For the next part, connect to one of the cluster nodes; this operation only needs to be done once per cluster.

1. Check your cluster networks. All networks should be up and running.

(Screenshot: cluster networks in Failover Cluster Manager)
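If you prefer PowerShell, a quick way to check the same thing (the FailoverClusters module is available on every cluster node):

Get-ClusterNetwork | Format-Table Name, State, Role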

2. Now you need to check and change the network metrics. The lowest metric means that this cluster network has the highest priority. I recommend giving the cluster heartbeat traffic the highest priority, because if that traffic fails, the node goes down within the cluster.

I set this configuration on a Scale-Out File Server, so my traffic is prioritized as follows:

high -> Cluster -> Storage 01 -> Storage 02 -> Management -> low

That means I need to run the following script:
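A sketch of such a script, assuming the cluster networks are named exactly as in the priority list above; adjust the names and metric values to your environment. Setting the metric manually disables AutoMetric for that network.

(Get-ClusterNetwork "Cluster").Metric = 100
(Get-ClusterNetwork "Storage 01").Metric = 200
(Get-ClusterNetwork "Storage 02").Metric = 300
(Get-ClusterNetwork "Management").Metric = 1000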

You can check the result with:
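For example (the selected columns are just a suggestion):

Get-ClusterNetwork | Sort-Object Metric | Format-Table Name, Metric, AutoMetric, Role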

The result should look like the screen below.

(Screenshot: cluster network metrics)

That's all. They are only small changes, but they greatly improve the stability of your clusters.

 

 

SoFS|W2k12R2|2x1GB|4x10GB|4xFC

Scale-Out File Server cluster network configuration with the following parameters:

The following configuration leverages 2x 1 GbE and 4x 10 GbE NICs and LOM (LAN on Motherboard), plus 2x Fibre Channel connections. Four NICs are usable for SMB 3.x Multichannel.

The storage is provisioned to the Scale-Out File Server hosts via Fibre Channel.
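Once SMB traffic is flowing, you can verify that all four 10 GbE interfaces are really offered and used by SMB Multichannel. This is only a quick check, not part of the configuration itself:

# Interfaces the SMB server offers for Multichannel (RSS/RDMA capabilities depend on the NICs)
Get-SmbServerNetworkInterface | Format-Table ScopeName, IpAddress, LinkSpeed, RssCapable, RdmaCapable

# Active Multichannel connections, run while traffic is flowing
Get-SmbMultichannelConnection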


 Pros and cons of this solution

Pro:
– High bandwidth
– Fully fault redundant
– Fibre Channel is the most common SAN technology
– Many NICs

Con:
– A lot of NICs and switches needed
– A lot of technologies involved
– No separate team for Cluster and Management
– Expensive

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
FC SW01 4/8 GBit/s FC physical, stacked or independent
FC SW02 4/8 GBit/s FC physical, stacked or independent

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 100 10.11.100.0/24 1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
Cluster 101 10.11.101.0/24 1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
SMB 01 200 10.11.45.0/24 10GBE SW01
SMB 02 200 10.11.46.0/24 10GBE SW02
SMB 03 200 10.11.47.0/24 10GBE SW01
SMB 04 200 10.11.48.0/24 10GBE SW02
FC 01 – – FC SW01
FC 02 – – FC SW02
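The Management and Cluster networks share one 1 GbE team with VLAN tagging. A sketch of how that team and its tagged interfaces could be created; the team name and NIC names are examples, the VLAN IDs come from the table above:

# Team the two 1 GbE NICs
New-NetLbfoTeam -Name "Team1G" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# VLAN-tagged team interfaces for Management (VLAN 100) and Cluster (VLAN 101)
Add-NetLbfoTeamNic -Team "Team1G" -VlanID 100 -Name "Management"
Add-NetLbfoTeamNic -Team "Team1G" -VlanID 101 -Name "Cluster"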

 Possible rear view of the server


 Schematic representation


Switch Port Configuration

   
 

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
SMB traffic high

 

Hyper-V|W2k12R2|3x1GB

Don't use this configuration in production! It's only for lab use!

This configuration is very similar to the 4x 10 GbE config, but it is not usable in production.

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 3x 1 GbE NICs and LOM (LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical setup with a software-defined / converged network for Hyper-V.


 Pros and cons of this solution

Pro:
– Easy to build and good for lab use
– Enough bandwidth for test workloads
– Fully converged
– Cheap to build

Con:
– Limited bandwidth
– Not usable in production

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
SoftSW01 1 GBit/s software-defined / converged

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24 SoftSW01
Storage 400 10.11.40.0/24 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW01
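All of these networks sit on the converged switch SoftSW01, so the host vNICs have to be tagged with the matching VLANs once they exist (see the sketch under the bandwidth table below). The vNIC names are assumptions; the VLAN IDs come from the table, and the VM networks get their VLANs on the individual VM adapters instead:

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Livemigration" -Access -VlanId 450
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI" -Access -VlanId 400   # Storage network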

 Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC Min. Bandwidth Weight (PowerShell sketch below)
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%
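A sketch of the corresponding converged setup. The switch name SoftSW01 comes from the tables above; the team name "LabTeam" and the vNIC names are assumptions, and the weights are the ones from the table:

# Converged vSwitch with weight-based minimum bandwidth
New-VMSwitch -Name "SoftSW01" -NetAdapterName "LabTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs on the converged switch
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Cluster"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "iSCSI"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Livemigration"

# Minimum bandwidth weights from the table
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 30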

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on the VM workload

 

Hyper-V|W2k12R2|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 10 GbE NICs and LOM (LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical setup with a software-defined / converged network for Hyper-V.


 Pros and cons of this solution

Pro:
– High bandwidth for VMs
– Good bandwidth for storage
– The software-defined network can be fully controlled from the hypervisor
– Fully fault redundant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– Becomes limited with a large number of VMs
– Network admins won't like it
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name Bandwidth Switch type
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
SoftSW01 10 GBit/s software-defined / converged

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24 SoftSW01
Storage 400 10.11.40.0/24 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW01

 Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC Min. Bandwidth Weight (PowerShell sketch below)
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%
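For this design the four 10 GbE NICs are teamed below the converged switch, either switch independent or LACP as mentioned in the pros. A short sketch; the NIC and team names are examples, and the vNICs and weights are created exactly as in the 3x 1 GbE sketch above, just on top of this team:

# Team the four 10 GbE NICs (use -TeamingMode Lacp only with stacked switches)
New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Converged switch on top of the team
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team10G" -MinimumBandwidthMode Weight -AllowManagementOS $false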

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on the VM workload

 

Hyper-V|W2k12R2|4x1GB|2x10GB|2xFC

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 2x 10 GbE NICs and LOM (LAN on Motherboard), plus 2x Fibre Channel connections. The storage can be connected via Fibre Channel with MPIO. The configuration combines a physical setup with a software-defined / converged network for Hyper-V.
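Because the storage is attached via Fibre Channel with MPIO, the multipath feature has to be enabled on every Hyper-V host. A minimal sketch; check your storage vendor's DSM documentation before claiming the LUNs:

# Install the MPIO feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Claim all Fibre Channel attached disks for the in-box DSM (-n suppresses the automatic reboot)
mpclaim.exe -n -i -a ""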


 Pros and cons of this solution

Pro:
– High bandwidth for VMs
– Good bandwidth for storage
– Separate management and heartbeat interfaces
– Fully fault redundant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode
– Fibre Channel is the most common SAN technology

Con:
– Limited bandwidth for Livemigration
– A lot of NICs and switches needed
– A lot of technologies involved
– Expensive

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
FC SW01 4/8 GBit/s FC physical, stacked or independent
FC SW02 4/8 GBit/s FC physical, stacked or independent
SoftSW01 1 GBit/s software-defined / converged
SoftSW02 10 GBit/s software-defined / converged

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24  1GBE SW01 & 1GBE SW02
Virtual Machines 200 – x 10.11.x.x/x  SoftSW02

 Possible rear view of the server



 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC Min. Bandwidth Weight (PowerShell sketch below)
Management 40%
Cluster 10%
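A sketch for SoftSW01, the 1 GbE converged switch that only carries the Management and Cluster vNICs in this design. The team name is an assumption; the weights come from the table:

New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team1G" -MinimumBandwidthMode Weight -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Cluster"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10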

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
Livemigration medium
VMs depending on the VM workload