Hyper-V|W2k12R2|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 10 GbE NICs and LOM (LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network with a software-defined / converged network for Hyper-V.
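
If you go the iSCSI route, MPIO should be enabled on every cluster node before the paths are connected. A minimal sketch in PowerShell; the target portal address 10.11.40.50 is a placeholder from the Storage subnet used below:

    # Install MPIO and let the Microsoft DSM claim iSCSI devices
    Install-WindowsFeature -Name Multipath-IO -Restart
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Register the target portal and connect all discovered targets with multipath
    New-IscsiTargetPortal -TargetPortalAddress "10.11.40.50"
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true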


 Pros and Cons of this solution

 Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Software-defined network can be fully controlled from the hypervisor
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

 Con
– Becomes limited with a large number of VMs
– Network admins won't like it
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name   Bandwidth   Switch type
10GBE SW01    10 GBit/s   physical, stacked or independent
10GBE SW02    10 GBit/s   physical, stacked or independent
SoftSW01      10 GBit/s   software-defined / converged

 Necessary Networks

Network name       VLAN      IP Network (IPv4)   Connected to Switch
Management         100       10.11.100.0/24      SoftSW01
Cluster            101       10.11.101.0/24      SoftSW01
Livemigration      450       10.11.45.0/24       SoftSW01
Storage            400       10.11.40.0/24       SoftSW01
Virtual Machines   200 – x   10.11.x.x/x         SoftSW01
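
To build SoftSW01 as a converged switch, the four 10 GbE ports are teamed and the host vNICs are created on top. A minimal sketch, assuming the physical adapters are named NIC1 through NIC4 (adapter and team names are placeholders):

    # Team the four 10 GbE adapters (Dynamic load balancing is new in 2012 R2)
    New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Create the converged switch with weight-based minimum bandwidth
    New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team10GbE" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add the host vNICs and tag them with the VLANs from the table above
    foreach ($name in "Management","Cluster","Livemigration","Storage") {
        Add-VMNetworkAdapter -ManagementOS -Name $name -SwitchName "SoftSW01"
    }
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 100
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 101
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Livemigration" -Access -VlanId 450
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Storage"       -Access -VlanId 400

For LACP teaming mode, swap -TeamingMode SwitchIndependent for -TeamingMode Lacp; this requires stacked switches.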

 Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%
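
The PowerShell Command column boils down to Set-VMNetworkAdapter with the weights from the table. A sketch; note the storage vNIC is listed as iSCSI here, so adjust the name if you created it as Storage:

    # Apply the minimum bandwidth weights (only effective on a switch
    # created with -MinimumBandwidthMode Weight)
    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 5
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI"         -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 30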

QoS Configuration Switch

Network name    Priority
Management      medium
Cluster         high
Storage         high
Livemigration   medium
VMs             depending on the VM workload

 

Hyper-V|W2k12R2|4x1GB|2x10GB|2xFC

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 2x 10 GbE NICs and LOM (LAN on Motherboard), plus 2x Fibre Channel connections. The storage can be connected via Fibre Channel with MPIO. The configuration combines a physical network with a software-defined / converged network for Hyper-V.
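
For the Fibre Channel paths, MPIO has to claim the LUNs on every node. A minimal sketch; the vendor and product IDs are placeholders and must match what your storage vendor documents (8-character vendor ID and 16-character product ID, space padded):

    # Install MPIO, then register the array for the Microsoft DSM
    Install-WindowsFeature -Name Multipath-IO -Restart
    New-MSDSMSupportedHW -VendorId "VENDOR  " -ProductId "PRODUCT         "
    Update-MPIOClaimedHW -Confirm:$false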


 Pros and Cons of this solution

 Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Separate management and heartbeat interfaces
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Fibre Channel is the most common SAN technology

 Con
– Limited bandwidth for live migration
– A lot of NICs and switches needed
– A lot of technologies involved
– Expensive

 Switches

Switch name   Bandwidth       Switch type
1GBE SW01     1 GBit/s        physical, stacked or independent
1GBE SW02     1 GBit/s        physical, stacked or independent
10GBE SW01    10 GBit/s       physical, stacked or independent
10GBE SW02    10 GBit/s       physical, stacked or independent
FC SW01       4/8 GBit/s FC   physical, stacked or independent
FC SW02       4/8 GBit/s FC   physical, stacked or independent
SoftSW01      1 GBit/s        software-defined / converged
SoftSW02      10 GBit/s       software-defined / converged

 Necessary Networks

Network name       VLAN      IP Network (IPv4)   Connected to Switch
Management         100       10.11.100.0/24      SoftSW01
Cluster            101       10.11.101.0/24      SoftSW01
Livemigration      450       10.11.45.0/24       1GBE SW01 & 1GBE SW02
Virtual Machines   200 – x   10.11.x.x/x         SoftSW02
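
Because live migration runs over the dedicated 1 GbE network here, it is worth pinning it to the 10.11.45.0/24 subnet and enabling compression to compensate for the limited bandwidth. A sketch:

    # Restrict live migration to the dedicated subnet and compress the traffic
    Enable-VMMigration
    Set-VMHost -UseAnyNetworkForMigration $false `
               -VirtualMachineMigrationPerformanceOption Compression
    Add-VMMigrationNetwork "10.11.45.0/24"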

 Possible rear view of the server

[Image: RearSvr02 – rear view of the server]


 Schematic representation

[Image: NIC01 – schematic representation]

Switch Port Configuration

[Image: sw1gbe – switch port configuration]

Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 40%
Cluster 10%

QoS Configuration Switch

Network name    Priority
Management      medium
Cluster         high
Livemigration   medium
VMs             depending on the VM workload

 

Hyper-V|W2k12R2|4x1GB|4x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 4x 10 GbE NICs and LOM (LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines a physical network with a software-defined / converged network for Hyper-V.


 Pros and Cons of this solution

 Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Separate management and heartbeat interfaces
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode

 Con
– Limited bandwidth for live migration
– A lot of NICs and switches needed

 Switches

Switch name   Bandwidth   Switch type
1GBE SW01     1 GBit/s    physical, stacked or independent
1GBE SW02     1 GBit/s    physical, stacked or independent
10GBE SW01    10 GBit/s   physical, stacked or independent
10GBE SW02    10 GBit/s   physical, stacked or independent
SoftSW01      1 GBit/s    software-defined / converged
SoftSW02      10 GBit/s   software-defined / converged

 Necessary Networks

Network name       VLAN      IP Network (IPv4)   Connected to Switch
Management         100       10.11.100.0/24      SoftSW01
Cluster            101       10.11.101.0/24      SoftSW01
Livemigration      450       10.11.45.0/24       1GBE SW01 & 1GBE SW02
Storage            400       10.11.40.0/24       10GBE SW01 & 10GBE SW02
Virtual Machines   200 – x   10.11.x.x/x         SoftSW02
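
Once the cluster is formed, it helps to set the network roles explicitly so storage traffic stays off the cluster networks. A sketch, assuming the cluster networks have been renamed to match the table (by default they appear as "Cluster Network 1" and so on):

    # Cluster network roles: 0 = none (storage), 1 = cluster only, 3 = cluster and client
    (Get-ClusterNetwork "Management").Role    = 3
    (Get-ClusterNetwork "Cluster").Role       = 1
    (Get-ClusterNetwork "Livemigration").Role = 1
    (Get-ClusterNetwork "Storage").Role       = 0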

 Possible rear view of the server

[Image: RearSvr01 – rear view of the server]


 Schematic representation

[Image: NIC01 / NIC02 – schematic representation]

Switch Port Configuration

[Image: sw1gbe / sw10gbe – switch port configuration]

Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 40%
Cluster 10%

QoS Configuration Switch

Network name    Priority
Management      medium
Cluster         high
Storage         high
Livemigration   medium
VMs             depending on the VM workload

 

New Microsoft Virtual Academy Course – Failover Clustering in Windows Server 2012 R2 Hyper-V

Yesterday Microsoft launched a new Microsoft Virtual Academy course explaining Failover Clustering in Windows Server 2012 R2 Hyper-V.

This full day of training includes the following modules:

  1. Introduction to Failover Clustering
  2. Cluster Deployment and Upgrades
  3. Cluster Networking
  4. Cluster Storage & Scale-Out File Server
  5. Hyper-V Clustering
  6. Multi-Site Clustering & Scale-Out File Server
  7. Advanced Cluster Administration & Troubleshooting
  8. Managing Clusters with System Center 2012 R2

You can find the course here: http://www.microsoftvirtualacademy.com/training-courses/failover-clustering-in-windows-server-2012-r2

Failover Cluster Manager Cluster Network – Network Interface Only Shows an IPv6 Loopback Address

Today's post is inspired by questions I often see when looking around the Microsoft clustering forums.

Many people using Microsoft Failover Cluster Manager for the first time notice an unnamed cluster network with an IPv6 loopback address.

[Screenshot: Failover Cluster Manager showing an unnamed cluster network with an IPv6 loopback address]

 

They often interpret it as a failure in Failover Cluster Manager. That is not the case. The reason is usually very simple: you have a network card in your system that has no IPv4 address, or no IP address at all.

[Screenshot: network adapter without an IP address]

Now you need to find out why you don't get an IP address. The most common reasons are:

– you have no DHCP server in that network and no static IP configured
– you get no IP from the DHCP server because the VLAN is wrong or the port is not cabled correctly
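
To see which adapter sits behind the unnamed network, you can list the cluster networks and their interfaces from PowerShell; a quick sketch:

    # Show every cluster network and the interfaces behind it
    Get-ClusterNetwork | Format-Table Name, Address, State
    Get-ClusterNetworkInterface | Format-Table Name, Node, Network, Address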

The easiest and most common case is that you simply forgot to set a static IP, so set the IP for that network card.
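
You can do that in the GUI as shown below, or from PowerShell. A sketch with placeholder values for the interface alias and the address:

    # Assign a static IPv4 address to the adapter
    New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 10.11.101.11 -PrefixLength 24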

[Screenshot: setting a static IP address on the network adapter]

After you set the IP, Failover Cluster Manager picks up the new address and everything looks fine 🙂

[Screenshot: cluster network now showing the new IP address]