Keep your files and volumes available with Windows Server 2016 and Microsoft Azure

Hi Community,

after having a great time at the MVP Summit and finishing chapter six of my first book, I wanted to put some more effort into my blog again. The first thing I want to talk about is a scenario to expand your Windows Server 2016 storage into the cloud and keep it available across branches.

The Scenario and use cases

Some of my customers need caches for fileservers and storage within their branch offices. In the past that meant expensive storage devices, complex software, or using DFS-R to transfer files.

With Windows Server 2016 we got Storage Replica, which opened up new options to think about.

The first scenario I tried and built used Windows Server 2016 based fileservers to replace DFS-R and establish asynchronous replication at the block level instead of the file level to reduce traffic.

You can use this kind of replication to move file or backup data into the cloud.

The Technologies

What technologies did I use?

Windows Server 2016 Storage Replica:

You can use Storage Replica to configure two servers to sync data so that each has an identical copy of the same volume. This topic provides some background of this server-to-server replication configuration, as well as how to set it up and manage the environment.

Source: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/server-to-server-storage-replication
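To make this more concrete, here is a minimal sketch of a server-to-server Storage Replica setup in PowerShell. The server, volume and replication group names are assumptions for illustration; the data and log volumes on both sides are expected to exist already.

```powershell
# Install the Storage Replica feature on both fileservers (hypothetical names)
Install-WindowsFeature -ComputerName "fs-onprem" -Name Storage-Replica -IncludeManagementTools -Restart
Install-WindowsFeature -ComputerName "fs-azure"  -Name Storage-Replica -IncludeManagementTools -Restart

# Optional: validate the topology and the expected performance first
Test-SRTopology -SourceComputerName "fs-onprem" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "fs-azure" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"

# Create the asynchronous server-to-server partnership (on-premises -> Azure)
New-SRPartnership -SourceComputerName "fs-onprem" -SourceRGName "rg-onprem" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "fs-azure" -DestinationRGName "rg-azure" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Asynchronous
```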

Microsoft Azure VMs https://azure.microsoft.com/en-us/documentation/services/virtual-machines/windows/ 

The Infrastructure Architecture

First of all you need one fileserver as the source and one fileserver as the target. You also need to ensure that the data you want to replicate is on a different volume than the data that stays onsite.


The source can either run on hardware or, which would be the most cost-efficient way, on a Windows Server 2016 Hyper-V cluster together with other virtual machines like a domain controller, backup server, webserver or database. With this kind of cluster you would also save the license costs for the fileserver Datacenter license, because you can use the host license with AVMA, and you can leverage the Windows Server 2016 license mobility to Azure, which enables you to use your Windows Server license for Azure virtual machines.


 

The Azure virtual machine should be a DS3 or larger, because you need at least two data disks. If you want to replicate more volumes, make sure the VM size allows you to add more disks.
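As a rough illustration, the data and log disks could be attached to the Azure VM with the AzureRM PowerShell module; the resource group, VM, storage account and disk names below are assumptions.

```powershell
# Attach a data and a log disk to the replication target VM (hypothetical names and sizes)
$vm = Get-AzureRmVM -ResourceGroupName "rg-file" -Name "fs-azure"

Add-AzureRmVMDataDisk -VM $vm -Name "sr-data" -Lun 0 -CreateOption Empty -DiskSizeInGB 512 -Caching None `
    -VhdUri "https://storageaccount.blob.core.windows.net/vhds/sr-data.vhd"
Add-AzureRmVMDataDisk -VM $vm -Name "sr-log" -Lun 1 -CreateOption Empty -DiskSizeInGB 128 -Caching None `
    -VhdUri "https://storageaccount.blob.core.windows.net/vhds/sr-log.vhd"

Update-AzureRmVM -ResourceGroupName "rg-file" -VM $vm
```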


 

On the network side you need to implement a site-to-site VPN connection between your offices and Azure. You need a high-performance gateway to get the necessary throughput and latency. I would recommend using Microsoft ExpressRoute and MPLS.
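If you start with a site-to-site VPN instead of ExpressRoute, a high-performance gateway and the connection could be created roughly like this with the AzureRM module. This is only a sketch: the virtual network, gateway subnet, public IP and local network gateway are assumed to exist already, and all names and the shared key are placeholders.

```powershell
# Existing resources (assumed): virtual network, public IP for the gateway, local network gateway
$vnet = Get-AzureRmVirtualNetwork -Name "vnet-core" -ResourceGroupName "rg-net"
$pip  = Get-AzureRmPublicIpAddress -Name "vpn-gw-pip" -ResourceGroupName "rg-net"
$lng  = Get-AzureRmLocalNetworkGateway -Name "branch01" -ResourceGroupName "rg-net"

$subnet   = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# Create a route-based VPN gateway with a high-performance SKU
$gw = New-AzureRmVirtualNetworkGateway -Name "vpn-gw" -ResourceGroupName "rg-net" -Location "West Europe" `
    -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku HighPerformance

# Connect it to the on-premises VPN device
New-AzureRmVirtualNetworkGatewayConnection -Name "s2s-branch01" -ResourceGroupName "rg-net" -Location "West Europe" `
    -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng -ConnectionType IPsec -SharedKey "PLACEHOLDER"
```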


Scenarios for using those fileserver volumes

The first scenario I tested so far was a geo-redundant standby system for a fileserver with profile data and shares. The two servers do not run in a cluster (I didn't try that yet). Both servers are part of a DFS-N namespace. The on-premises server is the primary DFS-N target for the clients, and the fileserver in Azure is the secondary target, which is disabled as a target for clients in DFS-N. Clients access the primary fileserver and the volume is replicated to the secondary fileserver.
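A minimal sketch of that DFS-N setup in PowerShell might look like this; the namespace, folder and server names are assumptions.

```powershell
# Create the DFS-N folder with the on-premises fileserver as the active primary target
New-DfsnFolder -Path "\\contoso.com\files\profiles" -TargetPath "\\fs-onprem\profiles"

# Add the Azure fileserver as a second target, but keep it disabled for client referrals
New-DfsnFolderTarget -Path "\\contoso.com\files\profiles" -TargetPath "\\fs-azure\profiles" -State Offline
```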


 

As long as everything runs fine, you only have incoming traffic to Azure, with no costs for traffic. If the primary volume or fileserver goes offline, you switch to the secondary fileserver by enabling it as a target in DFS-N and by making its volume the active replication source. You can either do this manually or trigger it via automation and monitoring, e.g. Azure Automation and Operations Management Suite or System Center Operations Manager.
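The manual failover could look roughly like this; again, all names are assumptions and the commands are only a sketch of the two steps described above.

```powershell
# 1. Reverse the Storage Replica direction so the Azure volume becomes the source
Set-SRPartnership -NewSourceComputerName "fs-azure" -SourceRGName "rg-azure" `
    -DestinationComputerName "fs-onprem" -DestinationRGName "rg-onprem"

# 2. Flip the DFS-N targets so clients are referred to the Azure fileserver
Set-DfsnFolderTarget -Path "\\contoso.com\files\profiles" -TargetPath "\\fs-onprem\profiles" -State Offline
Set-DfsnFolderTarget -Path "\\contoso.com\files\profiles" -TargetPath "\\fs-azure\profiles" -State Online
```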


You can also use this Azure fileserver as the replication target for several different fileservers.


 

A different approach can be taken when using this scenario for backup. First you back up your data to the primary fileshare or volume and replicate it to the cloud.


After the backup has finished, you switch the volume and transfer the backup to a cheaper location, e.g. Azure cold storage accounts.
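A simple way to move finished backups off the replicated volume would be to copy them into a blob container on a storage account with the cool access tier; the sketch below uses the Azure PowerShell storage cmdlets, and the account name, key variable, container and path are assumptions.

```powershell
# Context for the cool-tier storage account; $storageKey is assumed to be set beforehand
$ctx = New-AzureStorageContext -StorageAccountName "backupcoolstore" -StorageAccountKey $storageKey

# Upload every backup file from the secondary volume into the "backups" container
Get-ChildItem -Path "D:\Backup" -File -Recurse | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "backups" -Blob $_.Name -Context $ctx -Force
}
```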


The Pros and Cons

Pros:
– Easy to use
– The license for the Azure VM might be covered by your on-premises license
– No need for expensive storage systems
– Great to replicate file data and backups into the cloud

Cons:
– ExpressRoute needed for best performance
– Not documented yet and only a proof of concept

 

How to plan redundancy for Scale out Fileserver

Hey everybody,

after I posted some of my thoughts on Hyper-V redundancy, today I want to show you some examples of how you could plan redundancy for a Scale out Fileserver.

When to choose a redundancy where only one or two cluster nodes can fail?

That is the most common and easiest way of providing node redundancy in a cluster. It means you have enough nodes in your cluster to cover one or two node failures. You would choose that cluster configuration when all of your nodes are in one datacenter or server room and you need no geo-redundant storage solution. Please note that for a JBOD-based Scale out Fileserver you need a minimum of three JBODs. For a converged Scale out Fileserver with Windows Server 2016 you will need four equal Scale out Fileserver systems.


Traditional Scale out Fileserver with Storage Spaces and JBODs


Traditional Scale out Fileserver with SAN Storage Backend


Scale out Fileserver with Storage Spaces Direct in Windows Server 2016
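For the Storage Spaces Direct variant, the basic build of a four-node Scale out Fileserver could look like this sketch; node names, volume size and role name are assumptions.

```powershell
# Create the four-node cluster without shared storage and enable Storage Spaces Direct
New-Cluster -Name "s2d-clu" -Node "node01","node02","node03","node04" -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "s2d-clu"

# Carve a CSV volume out of the S2D pool and add the Scale out Fileserver role
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "vdisk01" -FileSystem CSVFS_ReFS -Size 2TB
Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "s2d-clu"
```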

When to choose a redundancy where you can lose half of the nodes?

In this scenario you can lose one half of your nodes, but you need to fulfil some more requirements like storage replication or direct WAN links. You would normally use this if you want to keep your services alive when one datacenter or server room fails.


With Storage Spaces Direct in Windows Server 2016 and RDMA RoCE


Scale out Fileserver with classic SAN storage replication
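When half of the nodes are allowed to fail, the cluster quorum becomes the critical piece: you need a witness outside both rooms so the surviving half can stay online. A small sketch with the Windows Server 2016 cloud witness (the storage account name and key are placeholders):

```powershell
# Configure a cloud witness in Azure so either half of the stretched cluster can keep quorum
Set-ClusterQuorum -CloudWitness -AccountName "witnessstorageacct" -AccessKey $accountKey

# Alternative: a file share witness in a third location
# Set-ClusterQuorum -FileShareWitness "\\witness-srv\clusterwitness"
```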

Using shared VHDx for guest clustering on a storage with no support for Microsoft Cluster & virtualization

Hi everybody,

last weekend my friend Udo Walberer and I had some struggles when setting up a guest cluster with shared VHDX as cluster volumes. We weren’t able to get it to work.

We were able to connect the VHDX and to configure it as a cluster volume, but after a reboot the VMs didn’t start and we got the following failure:

[Screenshot of the error message]

 

So after some research we figured out that the storage he was using had no support for Microsoft clustering and virtualization, nor for persistent reservations. After we tested on a supported storage, everything was fine.

So before you try to use shared VHDX, please check whether your storage supports these options.
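For reference, attaching a shared VHDX is exactly where the persistent reservation support comes into play; a minimal sketch (paths and VM names are assumptions) looks like this. If the underlying storage does not support persistent reservations, this is the setup that breaks after a reboot.

```powershell
# Create the shared VHDX on a cluster shared volume (path is an assumption)
New-VHD -Path "C:\ClusterStorage\Volume1\guest-csv01.vhdx" -SizeBytes 100GB -Dynamic

# Attach it to both guest cluster nodes with persistent reservation support enabled
Add-VMHardDiskDrive -VMName "GuestNode01" -Path "C:\ClusterStorage\Volume1\guest-csv01.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GuestNode02" -Path "C:\ClusterStorage\Volume1\guest-csv01.vhdx" -SupportPersistentReservations
```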

The two screenshots below show two Synology storages, the first one without that support and the second one with it.

[Screenshot: Synology storage without support for Microsoft clustering, virtualization and persistent reservations]

[Screenshot: Synology storage with that support]

 

 

Why you should plan your storage structure when working with clusters Part#1 Single Storage / Single Cluster

Hi everybody,

today I want to talk a bit about “why you should think about your storage structure when planning a cluster”.

Now most of you think “hey, why think about it? I’ll just ask the storage guy to create a new LUN for me and that’s it.”

Sorry guys, there’s the mistake. What if the storage guy provisions your new LUN on the same disk group as your other LUNs, or on a storage that is already full? What if the disk group or the storage fails?

Within this post, you will get some of my personal best practices on how to provision LUNs and cluster shared volumes on storages and disk groups.

Let us start with an easy one.

 

What you should do is create two disk groups. Yes, you will lose some disk space depending on the RAID level, but we are talking about redundancy and minimizing service outages, so a loss of disk space shouldn’t be a problem.

[Diagram: disk pool layout]

Now you need to decide which RAID level you want to use on your disk pool. This differs from storage to storage: if you have SSDs as a tier 0 cache, which most enterprise storages use, you can decide to use RAID 5 to increase your capacity; otherwise you should use RAID 10 to increase your I/O performance. NEVER use RAID 0!

For the best capacity-to-performance ratio for your storage, please talk to your storage vendor. They can tell you 🙂

[Diagram: disk group layout]

Now you decide on the LUNs that will be provisioned on the storage. Here you should use a design that is logical for you. For me it depends on the cluster service I run. I will show you one of the most common and understandable ones.

[Diagram: LUN layout]

At last comes the Windows Server cluster magic. Depending on the cluster service you are running, you should now deploy the services and roles onto the different cluster shared volumes.

I will try to show it with the example Scale Out Fileserver and Hyper-V.

Microsoft Hyper-V: [Placement diagram]
Microsoft Scale Out Fileserver: [Placement diagram]
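As a small illustration of the Hyper-V side, this sketch turns two of the provisioned LUNs into cluster shared volumes and places VMs on separate volumes; disk and VM names are assumptions.

```powershell
# Add the provisioned LUNs to the cluster and convert them to cluster shared volumes
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Place VMs on different CSVs so a single volume failure doesn't take out every service
New-VM -Name "DC01"  -MemoryStartupBytes 4GB -Generation 2 -Path "C:\ClusterStorage\Volume1"
New-VM -Name "SQL01" -MemoryStartupBytes 8GB -Generation 2 -Path "C:\ClusterStorage\Volume2"
```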

So that’s all for today. I hope this blog helps you out a bit.

SoFS|W2k12R2|2x1GB|4x10GB|4xFC

Scale out Fileserver cluster network configuration with the following parameters:

The following configuration leverages 2x 1 GbE NICs, 4x 10 GbE NICs and LOM (LAN on Motherboard), and 2x Fibre Channel connections. Four NICs are usable for SMB 3.x Multichannel.

The storage is provisioned to the Scale out Fileserver hosts via Fibre Channel.
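Once the four 10 GbE NICs are configured, it is worth verifying that SMB Multichannel really uses all of them. A short sketch of the checks (server name and share path are assumptions):

```powershell
# On the Scale out Fileserver node: list the interfaces SMB offers to clients (speed, RSS, RDMA)
Get-SmbServerNetworkInterface

# On a client or Hyper-V host: touch the share, then check how many channels were established
Get-ChildItem "\\sofs01\share01" | Out-Null
Get-SmbMultichannelConnection
```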


Pros and Cons of that solution

Pros:
– High bandwidth
– Fully fault redundant
– Fibre Channel is the most common SAN technology
– Many NICs

Cons:
– A lot of NICs and switches needed
– A lot of technologies involved
– No separate NIC team for cluster and management
– Expensive

 Switches

Switch name   Bandwidth        Switch type
1GBE SW01     1 Gbit/s         physical, stacked or independent
1GBE SW02     1 Gbit/s         physical, stacked or independent
10GBE SW01    10 Gbit/s        physical, stacked or independent
10GBE SW02    10 Gbit/s        physical, stacked or independent
FC SW01       4/8 Gbit/s FC    physical, stacked or independent
FC SW02       4/8 Gbit/s FC    physical, stacked or independent

Necessary Networks

Network name   VLAN   IP network (IPv4)   Connected to switch
Management     100    10.11.100.0/24      1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
Cluster        101    10.11.101.0/24      1GBE SW01 & 1GBE SW02 (via NIC team & VLAN tagging)
SMB 01         200    10.11.45.0/24       10GBE SW01
SMB 02         200    10.11.46.0/24       10GBE SW02
SMB 03         200    10.11.47.0/24       10GBE SW01
SMB 04         200    10.11.48.0/24       10GBE SW02
FC 01          –      –                   FC SW01
FC 02          –      –                   FC SW02

Possible rear view of the server


 Schematic representation


Switch Port Configuration

   
 

QoS Configuration Switch

Network name   Priority
Management     medium
Cluster        high
SMB traffic    high
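The table above describes the switch-side priorities. If you also want to tag and reserve bandwidth for SMB on the Windows hosts, a rough DCB/QoS sketch could look like this; the priority value, bandwidth share and adapter names are assumptions, and the switches must be configured to match.

```powershell
# DCB is required for the traffic class cmdlets
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB traffic with 802.1p priority 3 and reserve bandwidth for it via ETS
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Enable DCB/QoS on the 10 GbE adapters used for SMB (adapter names are assumptions)
Enable-NetAdapterQos -Name "SMB 01","SMB 02","SMB 03","SMB 04"
```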