Removed Cluster Node still shows itself as a cluster member

Hi guys,

One thing some of you may have noticed from time to time: when you evict a node from a cluster, it can happen that the node itself still thinks it belongs to a cluster, and you are not able to force it into a new one or use it as an independent server.



The reason for that is quite simple. There are some attributes that are configured in the AD computer account and in DNS for a cluster node. Sometimes not all of them are deleted when the node is evicted. Most likely it is the following attribute.



So now there are three ways to solve the issue:

1. Remove the Failover Clustering feature from your node, reboot and reinstall it if needed. In my personal experience that fixes the issue in about 80% of all cases.
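Option 1 can be scripted with the standard Server Manager cmdlets; a minimal sketch, run in an elevated PowerShell session on the affected node:

```powershell
# Remove the Failover Clustering feature and reboot
Uninstall-WindowsFeature -Name Failover-Clustering -Restart

# After the reboot, reinstall it (with the management tools) if needed
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
```

If the node still believes it is a cluster member afterwards, the FailoverClusters cmdlet Clear-ClusterNode -Force wipes the leftover cluster configuration from the local node.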



2. Remove the cluster node from Active Directory, delete the computer object and rejoin the node. That works in 100% of all cases because you get a totally new computer object and GUID with no old entries in it.
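Option 2 can also be done from PowerShell; a sketch using the ActiveDirectory module, where the node name NODE01 and the domain name are placeholders:

```powershell
# Delete the stale computer object from Active Directory (ActiveDirectory module)
Remove-ADComputer -Identity "NODE01" -Confirm:$false

# Then, on the node itself: leave and rejoin the domain,
# which creates a completely new computer object and GUID
Remove-Computer -UnjoinDomainCredential (Get-Credential) -Restart
# After the reboot:
Add-Computer -DomainName "yourdomain.local" -Credential (Get-Credential) -Restart
```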


3. Or, for the guys and girls who love some pain: search your AD computer attributes and DNS for all cluster entries where the faulty node is still listed and edit them by hand. I wouldn’t suggest it because it is very risky and takes a very long time.


How to move a cluster group & resource in Microsoft Failover Cluster

Today I want to show you two ways to move the cluster resource group and witness to another owner within the cluster. There are some scenarios where that becomes necessary, e.g. planned maintenance.

Moving the core cluster resources is not as easy as moving a cluster shared volume: there is no “move to best possible node” option for the witness.


Witness Options


Cluster shared Volume options

So now there are two ways to move the Witness and cluster resources.

1. PowerShell

On one of your cluster nodes you run the following command:
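A sketch of such a command, using Move-ClusterGroup from the FailoverClusters module (the target node name NODE02 is a placeholder):

```powershell
# Move the core cluster resource group (including the witness) to another node
Move-ClusterGroup -Name "Cluster Group" -Node "NODE02"
```

Omitting -Node lets the cluster pick the destination node itself.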


2. Failover Cluster Manager Interface

For that you click on your cluster name and navigate in the action panel on the right to “More Actions”. There you find “Move Core Cluster Resources”.


Using a shared VHDX for guest clustering on storage without support for Microsoft Clustering & virtualization

Hi everybody,

Last weekend my friend Udo Walberer and I had some struggles when setting up a guest cluster with a shared VHDX as cluster volume. We weren’t able to get it to work.

We were able to connect the VHDX and configure it as a cluster volume, but after a reboot the VMs wouldn’t start and we got the following error:



So after some research we figured out that the storage he was using had no support for Microsoft Clustering and virtualization, and also no support for persistent reservations. After we tested on a supported storage, everything was fine.

So please, before you try to use a shared VHDX, check whether your storage supports these options.
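One way to check this up front is the cluster validation, which includes SCSI-3 persistent reservation tests for the attached storage; a sketch using the FailoverClusters module (the node names are placeholders):

```powershell
# Run only the storage category of the cluster validation;
# it includes the SCSI-3 persistent reservation tests
Test-Cluster -Node "NODE01","NODE02" -Include "Storage"
```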

The two screenshots below show two Synology storages, the first without this support and the second with it.





Differences in Cluster Traffic with different workloads

Hi all,

Last week I was at a customer and had the chance to take some screenshots from two clusters which show how much heartbeat traffic a cluster can produce.

1. A two-node cluster with four virtual machines. The heartbeat traffic there is around 500 Kbit/s on the network, so pretty low.



2. A six-node cluster with 215 virtual machines. Without any external action the cluster heartbeat has a continuous traffic of around 60 MBit/s. I started some live migrations and the heartbeat jumped up to 150 MBit/s at times.



So my conclusion: the heartbeat traffic scales with the load and the number of nodes within the cluster.




Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 8x 1 GbE NICs and LOM (LAN on Motherboard) as well as 2x 10 GbE NICs. The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines physical networking with a software-defined / converged network for Hyper-V.

Pros and Cons of that solution

Pro:
– Good bandwidth for VMs
– Good bandwidth for storage
– Separate NICs for live migration, cluster and management
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– Network becomes limited with a large number of VMs
– The combination of hardware-defined and software-defined networking is sometimes hard to understand


Switch	Speed	Type
1GBE SW01	1 GBit/s	physical, stacked or independent
1GBE SW02	1 GBit/s	physical, stacked or independent
10GBE SW01	10 GBit/s	physical, stacked or independent
10GBE SW02	10 GBit/s	physical, stacked or independent
SoftSW01	10 GBit/s	software defined / converged
SoftSW02	10 GBit/s	software defined / converged

Necessary Networks

Networkname	VLAN	IP Network (IPv4)	Connected to Switch
Management	10		SoftSW01
Cluster	11		SoftSW01
Livemigration	45		1GBE SW01 / 1GBE SW02
Storage (with iSCSI)	40		10GBE SW01 / 10GBE SW02
Storage (with SMB)	50 / 51		10GBE SW01 / 10GBE SW02
Virtual Machines	200 – x	10.11.x.x/x	1GBE SW01 / 1GBE SW02

 Possible rear view of the server

 Schematic representation

Switch Port Configuration


Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
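The weights above can be set on the management OS vNICs with the standard Hyper-V cmdlets; a sketch, assuming the converged vSwitch is created with weight-based QoS (the team name 10GbE-Team is a placeholder):

```powershell
# The vSwitch must use weight-based minimum bandwidth mode
New-VMSwitch -Name "SoftSW01" -NetAdapterName "10GbE-Team" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Management OS vNICs with their minimum bandwidth weights
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SoftSW01"
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
```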

QoS Configuration Switch

Networkname Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs	depending on VM workload