Scale-Out File Server cluster network configuration with the following parameters:

The following configuration uses 2x 1 GbE NICs, 4x 10 GbE NICs (including LOM, LAN on Motherboard), and 2x Fibre Channel connections. Four NICs are usable for SMB 3.x Multichannel.

The storage is provisioned to the Scale-Out File Server hosts via Fibre Channel.

Pros and cons of this solution

Pros:
– High bandwidth
– Fully fault redundant
– Fibre Channel is the most common SAN technology
– Many NICs available

Cons:
– A lot of NICs and switches needed
– A lot of technologies involved
– No separate NIC team for cluster and management
– Expensive


Switch name   Bandwidth       Switch type
1GBE SW01     1 GBit/s        physical, stacked or independent
1GBE SW02     1 GBit/s        physical, stacked or independent
10GBE SW01    10 GBit/s       physical, stacked or independent
10GBE SW02    10 GBit/s       physical, stacked or independent
FC SW01       4/8 GBit/s FC   physical, stacked or independent
FC SW02       4/8 GBit/s FC   physical, stacked or independent

 Necessary Networks

Network name   VLAN   IP network (IPv4)   Connected to switch
Management     100                        1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
Cluster        101                        1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
SMB 01         200                        10GBE SW01
SMB 02         200                        10GBE SW02
SMB 03         200                        10GBE SW01
SMB 04         200                        10GBE SW02
FC 01          –                          FC SW01
FC 02          –                          FC SW02

 Possible rear view of the server

 Schematic representation

Switch Port Configuration


QoS Configuration Switch

Network name   Priority
Management     medium
Cluster        high
SMB traffic    high


Blog post about Scale-Out File Server @ Elanity Technik Blogs

Hey all,

for those of you who speak German: I published a blog post about Scale-Out File Server on the German Technik Blog of Elanity Network Partner GmbH.

Just click 🙂



How a Microsoft Failover Cluster works

During my daily work in the field, I often have to explain how a failover cluster works in the Microsoft world. Currently I mostly describe three kinds of clusters. The first is the cluster type used for a DHCP failover cluster, as described in a former blog post. The second is the DAG, or Database Availability Group, as used in Exchange. The third is the failover cluster as it works in Scale-Out File Server and Hyper-V clusters.

Today I want to focus on the file server and Hyper-V failover cluster and try to explain how they work.

First, you need to know that such a failover cluster must have an uneven (odd) number of cluster members.

Why do we need an uneven number of cluster members? The reason is a phenomenon called split brain.

Source: Wikipedia:

Split-brain is a term in computer jargon, based on an analogy with the medical Split-brain syndrome. It indicates data or availability inconsistencies originating from the maintenance of two separate data sets with overlap in scope, either because of servers in a network design, or a failure condition based on servers not communicating and synchronizing their data to each other. This last case is also commonly referred to as a network partition.

An uneven number of cluster members? But does this mean I can only have 1, 3, 5, etc. servers in my cluster?

Normally yes, but here comes the magic. The uneven member does not need to be a server; it can also be a share or a LUN. This share or LUN is called the cluster witness (a witness disk or file share witness). Every server and the cluster witness have the same weight (one vote) in the cluster.


Now one server takes ownership of the witness, becomes the “cluster master”, and thereby has more votes in the cluster. It gives the direction for the other nodes.


What happens when the cluster master fails? Very simple: together with the cluster master, its witness vote is lost too. That means in the end we still have an uneven number of votes in the cluster, and there won’t be a split brain.



If a cluster member fails, the witness is disconnected and deactivated. That again results in an uneven number of cluster votes, so we won’t run into split brain.



Now you may ask: I have more than two nodes, so what happens if a network failure splits the cluster into two halves? Wouldn’t that force the split-brain issue again?


The cluster members notice how many nodes are on each side. The cluster then continues to run on the half that holds the uneven (majority) number of votes.
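The voting logic described above can be sketched in a few lines of Python. This is a simplified model of majority-based quorum, not Microsoft’s actual quorum implementation; the node names and vote counts are illustrative:

```python
def has_quorum(votes_present: int, votes_total: int) -> bool:
    """A partition may keep running only if it holds a strict majority of all votes."""
    return votes_present > votes_total // 2

# Two nodes plus a witness: three votes in total.
total_votes = 3  # node1 + node2 + witness

# Network partition: node1 holds the witness, node2 is isolated.
partition_a = 2  # node1 + witness
partition_b = 1  # node2 alone

print(has_quorum(partition_a, total_votes))  # True  -> this half keeps running
print(has_quorum(partition_b, total_votes))  # False -> this half stops, no split brain
```

The same rule explains the witness scenarios in the text: if a node fails and the witness vote is removed, the remaining total is again uneven, so exactly one side can ever hold a strict majority.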




Clustered Storage Spaces on Dell JBODs Introduction

This information was collected together with my old team @Dell TechCenter and Dell Engineering, and with one of my best friends, Carsten Rachfahl – MVP Hyper-V.

Most of you know Storage Spaces already and know how to use it for Scale-Out File Server, but in the past there were some “issues” we had to fight with.

For all of you who need a recap on Storage Spaces, Scale-Out File Server, and related topics, I highly recommend Jose Barreto’s blog (Principal Program Manager at Microsoft).

One issue was that you need SCSI Enclosure Services (SES) 3.x on your JBOD or direct-attached SAS storage enclosures. There are a bunch of small vendors, like DataON, who use this version in their systems, but many of us have a preferred vendor like HP, EMC, NetApp, IBM, or Dell.

Most of them do not support this SES version. A few weeks ago, Dell came up with its first solution that supports SES 3.x and Storage Spaces. The second solution will follow in Q2 2014, also with SES 3.x support.

The first solution is the Dell PowerVault MD1200 and Dell PowerVault MD1220 together with the SAS 6 Gbit/s HBA.

For Dell PV MD1200 Hardware Specs, you can find more details here.

For Dell PV MD1220 Hardware Specs, you can find more details here.

For Dell SAS 6 Gbit/s HBA, you can find more details here.

To see the supported designs, please check out the guides on Dell TechCenter.


Dell MD 12x0

Dell PowerVault MD1200 (above) & MD1220 (below). Source:


The second combination could be the Dell PowerVault MD3060e (codename Roadking) together with the LSI 9206-16e. There is no official information yet.

For Dell PV MD3060e Hardware Specs, you can find more details here.

For LSI 9206-16e HBA, you can find more details here.

Dell PowerVault MD3060e (Roadking)


Deploying Windows Server 2012 R2 Storage Spaces on Dell PowerVault

Dell recently published some solution guides on how to deploy Windows Server 2012 R2 Storage Spaces on Dell PowerVault systems.

You can download the document here.