Why you should plan your storage structure when working with clusters – Part 1: Single Storage / Single Cluster

Hi everybody,

Today I want to talk a bit about why you should think about your storage structure when planning a cluster.

Now most of you are thinking: “Hey, why think about it? I’ll just ask the storage guy to create a new LUN for me and that’s it.”

Sorry guys, there’s the mistake. What if the storage guy provisions your new LUN on the same disk group as your other LUNs, or on a storage system that is already full? What if the disk group or the whole storage fails?

In this post, I will share some of my personal best practices for provisioning LUNs and Cluster Shared Volumes across storage systems and disk groups.

Let us start with an easy one.


What you should do is create two disk groups. Yes, you will lose some disk space depending on the RAID level, but we are talking about redundancy and minimizing service outages, so a loss of disk space shouldn’t be a problem.


Now you need to decide which RAID level you want to use on your disk pool. This differs from storage to storage: if you have SSDs as a level-0 cache, which most enterprise arrays have, you can choose RAID 5 to increase your usable capacity; otherwise you should use RAID 10 to increase your I/O performance. NEVER use RAID 0, because a single disk failure there means losing the whole pool!
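To make the capacity trade-off concrete, here is a small Python sketch comparing the usable capacity of RAID 5 and RAID 10 on the same set of disks. The disk count and size are just example values, not a recommendation:

```python
# Rough usable-capacity comparison for RAID 5 vs. RAID 10.
# Example values only: 8 disks of 2 TB each.
disks = 8
size_tb = 2

raid5_usable = (disks - 1) * size_tb   # one disk's worth of capacity goes to parity
raid10_usable = disks // 2 * size_tb   # everything is mirrored, so half the raw space

print(f"RAID 5 usable:  {raid5_usable} TB")   # 14 TB
print(f"RAID 10 usable: {raid10_usable} TB")  # 8 TB
```

So with eight 2 TB disks, RAID 5 gives you 14 TB usable versus 8 TB for RAID 10, but RAID 10 typically delivers better write I/O because there is no parity calculation on every write.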

For the best capacity-to-performance ratio on your storage, please talk to your storage vendor. They can tell you. 🙂


Now you define the LUNs that will be provisioned on the storage. Here you should use a design that is logical for you. For me it depends on the cluster service I run. I will show you one of the most common and understandable ones.


At last comes the Windows Server cluster magic. Depending on the cluster service you are running, you should now deploy the services and roles on the different Cluster Shared Volumes.

I will try to show it with the examples of Scale-Out File Server and Hyper-V.

Microsoft Hyper-V: Hyper-V place
Microsoft Scale-Out File Server: SoFS place

So that’s all for today. I hope this blog post helps you out a bit.


Scale-Out File Server cluster network configuration with the following parameters:

The following configuration leverages 2x 1 GbE NICs via LOM (LAN on Motherboard), 4x 10 GbE NICs, and 2x Fibre Channel connections. Four NICs are usable for SMB 3.x Multichannel.

The storage is provisioned to the Scale-Out File Server hosts via Fibre Channel.
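As a back-of-the-envelope check on what SMB Multichannel buys you here, a quick Python sketch of the theoretical aggregate bandwidth of the four 10 GbE NICs. These are line rates, not benchmark results, and ignore protocol overhead:

```python
# Theoretical aggregate bandwidth available to SMB 3.x Multichannel
# across the four 10 GbE NICs (line rate, ignoring overhead).
nics = 4
gbit_per_nic = 10

aggregate_gbit = nics * gbit_per_nic   # 40 Gbit/s
aggregate_gbyte = aggregate_gbit / 8   # 5.0 GB/s

print(f"{aggregate_gbit} Gbit/s = {aggregate_gbyte} GB/s")
```

In practice you will see less than 5 GB/s, but Multichannel spreading SMB traffic over all four NICs is what makes the Fibre Channel-backed storage bandwidth reachable from the clients at all.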

Pros and cons of that solution

Pro:
– High bandwidth
– Fully fault redundant
– Fibre Channel is the most common SAN technology
– Many NICs

Con:
– A lot of NICs and switches needed
– A lot of technologies involved
– No separate team for cluster and management
– Expensive


Switch name    Bandwidth       Switch type
1GBE SW01      1 GBit/s        physical, stacked or independent
1GBE SW02      1 GBit/s        physical, stacked or independent
10GBE SW01     10 GBit/s       physical, stacked or independent
10GBE SW02     10 GBit/s       physical, stacked or independent
FC SW01        4/8 GBit/s FC   physical, stacked or independent
FC SW02        4/8 GBit/s FC   physical, stacked or independent

Necessary networks

Network name   VLAN   IP network (IPv4)   Connected to switch
Management     100                        1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
Cluster        101                        1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
SMB 01         200                        10GBE SW01
SMB 02         200                        10GBE SW02
SMB 03         200                        10GBE SW01
SMB 04         200                        10GBE SW02
FC 01          –                          FC SW01
FC 02          –                          FC SW02

Possible rear view of the server

 Schematic representation

Switch Port Configuration


QoS configuration on the switches

Network name   Priority
Management     medium
Cluster        high
SMB traffic    high


Blog post about Scale-Out File Server @ Elanity Technik Blog

Hey all,

for those of you who speak German: I published a blog post about Scale-Out File Server on the German Technik Blog of Elanity Network Partner GmbH.

Just click http://www.elanity.de/technikblog/scale-out-fileserver. 🙂



White paper – Building High Performance Storage for Hyper-V Clusters on Scale-Out File Servers

A few days ago, Microsoft published a whitepaper on how to build high performance storage for Hyper-V clusters on Scale-Out File Servers using Violin Windows Flash Arrays.

To take a look at it, please click here.

Clustered Storage Spaces on Dell JBODs – Introduction

This information was collected together with my old team at Dell TechCenter, Dell Engineering, and one of my best friends, Carsten Rachfahl – MVP Hyper-V.

Most of you already know Storage Spaces and how to use them for a Scale-Out File Server, but in the past there were some “issues” we had to fight with.

For all of you who need a recap on Storage Spaces, Scale-Out File Server and related topics, I highly recommend Jose Barreto’s blog (he is a Principal Program Manager at Microsoft).

One was that you need SCSI Enclosure Services (SES) 3.x on your JBOD or direct-attached SAS storage enclosures. There are a bunch of small vendors, like DataON, who use this version in their systems, but many of us have a preferred vendor like HP, EMC, NetApp, IBM or Dell.

Most of them do not support this SES version. A few weeks ago Dell came up with its first solution that supports SES 3.x and Storage Spaces. The second one, also with SES 3.x support, will follow in Q2 2014.

The first solution is the Dell PowerVault MD1200 and Dell PowerVault MD1220 together with the SAS 6 Gbit/s HBA.

For Dell PV MD1200 Hardware Specs, you can find more details here.

For Dell PV MD1220 Hardware Specs, you can find more details here.

For the Dell SAS 6 Gbit/s HBA, you can find more details here.

To see the supported designs, please check out the guides on Dell TechCenter.


Dell MD 12x0

Dell PowerVault MD 1200 (above) & MD 1220 (below) Source: www.dell.de


The second combination could be the Dell PowerVault MD3060e (codename Roadking) together with the LSI 9206-16e HBA. There is no official information yet.

For Dell PV MD3060e Hardware Specs, you can find more details here.

For LSI 9206-16e HBA, you can find more details here.

Dell PowerVault MD3060e (Roadking)