Removed Cluster Node Still Shows Itself as a Cluster Member

Hi guys,

here is something some of you may notice from time to time: when you evict a node from a cluster, it can happen that the node itself still claims to be a cluster member, and you are not able to force it into a new cluster or use it as an independent server.

[Screenshot: the evicted node still reporting cluster membership]

The reason for that is quite simple: for a cluster node, several settings are configured on its AD computer account and in DNS. Sometimes not all of these attributes are removed when the node is evicted. Most likely it is the following attribute.

[Screenshot: the leftover attribute on the AD computer account]

So now there are three ways to solve the issue:

1. Remove the Failover Clustering feature from your node, reboot, and reinstall it if needed (see the PowerShell sketch after this list). In my personal experience that fixes the issue in about 80% of all cases.

[Screenshot: removing the Failover Clustering feature]

2. Remove the cluster node from Active Directory, delete the computer object and rejoin the node to the domain. That works in 100% of all cases because you get a completely new computer object and GUID with none of the old entries left.

[Screenshot: the node’s computer object in Active Directory]

3. Or, for the guys and girls who love some pain: search your AD computer attributes and DNS for all cluster entries that still contain the faulty node and edit them. I wouldn’t suggest it because it is risky and takes a very long time.
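For option 1, a minimal PowerShell sketch (assuming Windows Server 2012 R2 and an elevated session on the affected node) could look like this:

```powershell
# Remove the Failover Clustering feature and reboot the node.
Uninstall-WindowsFeature -Name Failover-Clustering -Restart

# After the reboot, reinstall the feature (including the management tools)
# if the node should join a cluster again.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
```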

 

How to move a cluster group & resource in Microsoft Failover Cluster

Today I want to show you two ways to move the core cluster resource group and the witness to another owner within the cluster. There are some scenarios where that becomes necessary, e.g. planned maintenance.

Moving the core cluster resources is not as easy as moving a Cluster Shared Volume: there is no “Move to best possible node” option for the witness.

[Screenshot: Witness options]

[Screenshot: Cluster Shared Volume options]

So there are two ways to move the witness and the core cluster resources.

1. PowerShell

On one of your cluster nodes you run the command from the FailoverClusters PowerShell module.
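A minimal sketch (the target node name “HV02” is just a placeholder; without -Node the cluster picks the best possible node itself):

```powershell
# Move the core cluster resource group (which holds the witness) to another node.
Import-Module FailoverClusters

# Let the cluster choose the target node:
Move-ClusterGroup -Name "Cluster Group"

# Or move it to a specific node:
Move-ClusterGroup -Name "Cluster Group" -Node "HV02"
```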

 

2. Failover Cluster Manager Interface

For that, click on your cluster name and navigate in the Actions panel on the right to “More Actions”. There you will find “Move Core Cluster Resources”.

[Screenshot: Move Core Cluster Resources in Failover Cluster Manager]

Using shared VHDX for guest clustering on storage with no support for Microsoft Clustering & Virtualization

Hi everybody,

last weekend my friend Udo Walberer and I had some struggles when setting up a guest cluster with shared VHDX files as cluster volumes. We weren’t able to get it to work.

We were able to attach the VHDX and configure it as a cluster volume, but after a reboot the VMs would not start and we got the following failure:

[Screenshot: the shared VHDX failure message]

After some research we figured out that the storage he was using had no support for Microsoft Clustering and Virtualization, nor for Persistent Reservations. After we tested on a supported storage, everything worked fine.

So before you try to use shared VHDX, please check whether your storage supports these options.

The two screenshots below show two Synology storages, the first one without that support and the second one with it.

[Screenshot: Synology storage without that support]

[Screenshot: Synology storage with that support]
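Besides checking the vendor’s compatibility information, you can also let the cluster validation storage tests verify persistent reservation support. A minimal sketch (the node names are placeholders; run it from one of the cluster nodes that will share the disks):

```powershell
# Run only the storage tests of cluster validation, which include the
# persistent reservation checks.
Test-Cluster -Node "GuestNode01","GuestNode02" -Include "Storage"
```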

 

 

Differences in Cluster Traffic with different workloads

Hi all,

last week I was at a customer site and had the chance to take some screenshots of two clusters which show how much heartbeat traffic a cluster can produce.

1. A two-node cluster with four virtual machines. Here the heartbeat traffic on the cluster network is around 500 Kbit/s, so pretty low.

[Screenshot: heartbeat traffic on the two-node cluster]

2. A six-node cluster with 215 virtual machines. Without any external action the cluster heartbeat produces continuous traffic of around 60 Mbit/s. When I started some live migrations, the heartbeat traffic partly jumped up to 150 Mbit/s.

[Screenshot: heartbeat traffic on the six-node cluster]

So my conclusion: the heartbeat traffic scales with the load and the number of nodes within the cluster.
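If you want to check which networks carry the cluster communication and with which priority, a quick sketch (run on any cluster node):

```powershell
# List the cluster networks with their role (0 = none, 1 = cluster only,
# 3 = cluster and client) and metric (the lowest metric is preferred for cluster traffic).
Get-ClusterNetwork | Format-Table Name, Role, Metric, Address, AddressMask
```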

 

 

Hyper-V|W2k12R2|8x1GB|2x10GB

Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 8x 1 GbE NICs / LOM (LAN on Motherboard) and 2x 10 GbE NICs. The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration uses a mix of physical networks and a software-defined / converged network for Hyper-V.


Pros and Cons of that solution

Pro:
– Good bandwidth for VMs
– Good bandwidth for storage
– Separate NICs for live migration, cluster and management
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– The network becomes a limiting factor with a large number of VMs
– The combination of hardware-defined and software-defined / converged networking is sometimes hard to understand

Switches

Switch | Speed | Type
1GBE SW01 | 1 Gbit/s | physical, stacked or independent
1GBE SW02 | 1 Gbit/s | physical, stacked or independent
10GBE SW01 | 10 Gbit/s | physical, stacked or independent
10GBE SW02 | 10 Gbit/s | physical, stacked or independent
SoftSW01 | 10 Gbit/s | software defined / converged
SoftSW02 | 10 Gbit/s | software defined / converged

Necessary Networks

Network name | VLAN | IP network (IPv4) | Connected to switch
Management | 10 | 10.11.10.0/24 | SoftSW01
Cluster | 11 | 10.11.11.0/24 | SoftSW01
Livemigration | 45 | 10.11.45.0/24 | 1GBE SW01 / 1GBE SW02
With iSCSI – Storage | 40 | 10.11.40.0/24 | 10GBE SW01 / 10GBE SW02
With SMB – Storage | 50 / 51 | 10.11.50.0/24 / 10.11.51.0/24 | 10GBE SW01 / 10GBE SW02
Virtual Machines | 200 – x | 10.11.x.x/x | 1GBE SW01 / 1GBE SW02
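For the converged part, a minimal PowerShell sketch (the physical adapter name "10G-NIC1" and the exact NIC-to-vSwitch binding are assumptions; the VLAN IDs come from the table above):

```powershell
# Create the converged vSwitch SoftSW01 with weight-based QoS and add the
# Management and Cluster host vNICs with their VLANs.
New-VMSwitch -Name "SoftSW01" -NetAdapterName "10G-NIC1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"    -SwitchName "SoftSW01"

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"    -Access -VlanId 11
```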

Possible rear view of the server


 Schematic representation


Switch Port Configuration

   

Bandwidth Configuration vNICs

vNIC | min. bandwidth weight | PowerShell command
Management | 10% | see sketch below
Cluster | 5% | see sketch below
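A minimal sketch for those weights (assuming host vNICs named "Management" and "Cluster" on a vSwitch created with -MinimumBandwidthMode Weight):

```powershell
# Assign the minimum bandwidth weights from the table to the host vNICs.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"    -MinimumBandwidthWeight 5
```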

QoS Configuration Switch

Network name | Priority
Management | medium
Cluster | high
Storage | high
Livemigration | medium
VMs | depending on VM workload