Keep your files and volumes available with Windows Server 2016 and Microsoft Azure

Hi Community,

After having a great time at the MVP Summit and finishing chapter six of my first book, I wanted to put some more effort into my blog again. The first thing I want to talk about is a scenario for extending your Windows Server 2016 storage into the cloud and keeping it available across branches.

The Scenario and use cases

Sometimes I have customers who need caches for file servers and storage within their branch offices. In the past, this required expensive storage devices or complex software, or I used DFS-R to transfer files.

With Windows Server 2016 we got Storage Replica, which gave me new opportunities to think about.

The first scenario I tried and built used Windows Server 2016-based file servers to replace DFS-R and establish asynchronous replication at the byte level rather than the file level, which reduces traffic.

You can use this kind of replication to move file or backup data into the cloud.

The Technologies

What technologies did I use?

Windows Server 2016 Storage Replica:

You can use Storage Replica to configure two servers to sync data so that each has an identical copy of the same volume. This topic provides some background of this server-to-server replication configuration, as well as how to set it up and manage the environment.

Source: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/server-to-server-storage-replication
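As a sketch of what such a server-to-server setup looks like, the configuration is done with the Storage Replica PowerShell cmdlets. Server names, drive letters, and replication group names below are placeholders I made up for illustration:

```powershell
# Install the Storage Replica feature on both servers (requires a restart)
Install-WindowsFeature -ComputerName SRV-ONPREM -Name Storage-Replica -IncludeManagementTools -Restart
Install-WindowsFeature -ComputerName SRV-AZURE  -Name Storage-Replica -IncludeManagementTools -Restart

# Validate the topology first; the generated report shows whether the
# volumes, log disks, and network meet the requirements
Test-SRTopology -SourceComputerName SRV-ONPREM -SourceVolumeName E: -SourceLogVolumeName F: `
    -DestinationComputerName SRV-AZURE -DestinationVolumeName E: -DestinationLogVolumeName F: `
    -DurationInMinutes 30 -ResultPath C:\Temp

# Create the partnership with asynchronous replication (block level, not file level)
New-SRPartnership -SourceComputerName SRV-ONPREM -SourceRGName rg-onprem `
    -SourceVolumeName E: -SourceLogVolumeName F: `
    -DestinationComputerName SRV-AZURE -DestinationRGName rg-azure `
    -DestinationVolumeName E: -DestinationLogVolumeName F: `
    -ReplicationMode Asynchronous
```

Note that each replicated data volume needs its own log volume on both ends, which is why the servers need at least two dedicated disks.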

Microsoft Azure VMs https://azure.microsoft.com/en-us/documentation/services/virtual-machines/windows/ 

The Infrastructure Architecture

First of all, you need one file server as the source and one file server as the target. You also need to ensure that the data you want to replicate is on a different volume than the data that stays on site.


The source can either run on hardware or, which would be the most cost-efficient way, on a Windows Server 2016 Hyper-V cluster together with other virtual machines such as a domain controller, backup server, web server, or database server. With this kind of cluster you would also save the license costs for the file server's Datacenter license, because you can use the host license with AVMA, and you can leverage Windows Server 2016 license mobility to Azure, which enables you to use your Windows Server license for Azure virtual machines.


The Azure virtual machine should be a DS3 or above, because you need at least two disks. If you want to replicate more volumes, those sizes also let you attach more disks.
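For illustration, attaching the required data and log disks to an existing DS3 VM could look like this with the AzureRM PowerShell module of that time; the resource group, VM, and storage account names are placeholders, and this sketch assumes unmanaged (VHD-based) disks:

```powershell
# Get the existing DS3 VM (Standard_DS3 offers enough data disk slots and IOPS)
$vm = Get-AzureRmVM -ResourceGroupName "rg-fileserver" -Name "fs-azure-01"

# Attach one data volume and one Storage Replica log volume as empty unmanaged disks
Add-AzureRmVMDataDisk -VM $vm -Name "data01" -Lun 0 -Caching None -DiskSizeInGB 512 `
    -CreateOption Empty -VhdUri "https://mystorageacct.blob.core.windows.net/vhds/data01.vhd"
Add-AzureRmVMDataDisk -VM $vm -Name "log01" -Lun 1 -Caching None -DiskSizeInGB 128 `
    -CreateOption Empty -VhdUri "https://mystorageacct.blob.core.windows.net/vhds/log01.vhd"

# Apply the changes to the VM
Update-AzureRmVM -ResourceGroupName "rg-fileserver" -VM $vm
```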


From the network side you need to implement a site-to-site VPN connection between your offices and Azure. You need a high-performance gateway to get the necessary throughput and latency. I would recommend using Microsoft ExpressRoute with MPLS.


Scenarios for using those file server volumes

The first scenario I tested so far was a geo-redundant standby system for a file server with profile data and shares. The two servers do not run in a cluster (I haven't tried that yet). Both servers are part of a DFS-N namespace. The on-premises server is the primary DFS-N target for the clients; the file server in Azure is the secondary target and is disabled as a target for clients in DFS-N. All access goes to the primary file server, and the storage information is replicated to the secondary file server.
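A sketch of that DFS-N configuration with the DFSN PowerShell module; the namespace path and server names are examples, not the real environment:

```powershell
# The on-premises server is the primary (online) folder target for clients
New-DfsnFolder -Path "\\contoso.com\files\profiles" -TargetPath "\\SRV-ONPREM\profiles"

# The Azure file server is added as a second target but set offline, so
# clients keep using the primary while Storage Replica keeps the copy current
New-DfsnFolderTarget -Path "\\contoso.com\files\profiles" `
    -TargetPath "\\SRV-AZURE\profiles" -State Offline
```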


As long as everything runs fine, you only have incoming traffic to Azure, which incurs no traffic costs. If the primary volume or file server goes offline, you switch to the secondary file server by enabling it as a target in DFS-N and swapping the target volume to active. You can do this either manually or triggered via automation and monitoring, e.g. Azure Automation and Operations Management Suite, or System Center Operations Manager.
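The failover itself boils down to two steps: reversing the replication direction and enabling the secondary DFS-N target. A hedged sketch, reusing the example names from above:

```powershell
# Reverse the replication direction so the Azure volume becomes the writable source
Set-SRPartnership -NewSourceComputerName SRV-AZURE -SourceRGName rg-azure `
    -DestinationComputerName SRV-ONPREM -DestinationRGName rg-onprem

# Enable the Azure file server as a DFS-N target so clients fail over to it
Set-DfsnFolderTarget -Path "\\contoso.com\files\profiles" `
    -TargetPath "\\SRV-AZURE\profiles" -State Online
```

These two commands are exactly what you would wrap into a runbook if you trigger the failover from Azure Automation or System Center Operations Manager.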


You can also use the Azure file server as the replication target for several different on-premises file servers.


A different approach is to use this scenario for backups. First you back up your data to the primary file share or volume and replicate it to the cloud.


After the backup has finished, you switch the volume and transfer the backup to a cheaper location, e.g. an Azure storage account with the Cool access tier.
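Moving the finished backup from the replica volume into a cool-tier storage account could be sketched like this; the account, container, and file names are made up, and at the time a "cool" account meant a Blob storage account with the Cool access tier:

```powershell
# Create a Blob storage account with the Cool access tier for cheap backup retention
New-AzureRmStorageAccount -ResourceGroupName "rg-backup" -Name "backupcoolacct" `
    -Location "westeurope" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool

# Upload the backup file from the (now passive) replica volume into a container
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName "rg-backup" -Name "backupcoolacct").Context
New-AzureStorageContainer -Name "backups" -Context $ctx
Set-AzureStorageBlobContent -File "E:\Backups\full-backup.vhdx" -Container "backups" `
    -Blob "full-backup.vhdx" -Context $ctx
```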


The Pros and Cons

Pros:

  • Easy to use
  • The license for the Azure VM might be covered by your on-premises license
  • No need for expensive storage systems
  • Great for replicating file data and backups into the cloud

Cons:

  • ExpressRoute needed for best performance
  • Not documented yet and only a proof of concept

 

How to configure the StorSimple Virtual Array – Microsoft Video Series

Hi everybody,

As you may already know, Microsoft announced the StorSimple Virtual Array in December 2015.

This week, Program Manager Sharath Suryanaarayan published a guide on YouTube about how to configure it. If you are interested in that product, you shouldn't miss his videos.
Video #01: StorSimple Virtual Array Getting Started

Video #02: StorSimple Virtual Array Create

Video #03: StorSimple Virtual Array Configuration

Video #04: StorSimple Virtual Array Create Shares

Video #05: StorSimple Virtual Array DR

Microsoft Announcing the StorSimple Virtual Array Preview

Hi everybody,

Yesterday Microsoft announced the public preview of its new StorSimple Virtual Array. For me, it is a great fit with Microsoft's cloud and software-defined strategy. The virtual array can run under Hyper-V or VMware ESXi and work as a NAS or iSCSI server to manage up to 64 TB of storage in Azure.

What’s new in the array? (quote from azure.microsoft.com)

Virtual array form factor

The StorSimple Virtual Array is a virtual machine which can be run on Hyper-V (2008 R2 and above) or VMware ESXi (5.5 and above) hypervisors. It provides the ability to configure the virtual array with data disks of different sizes to accommodate the working set of the data managed by the device. A web-based GUI provides a fast and easy way for the initial setup of the virtual array.

Multi-protocol

The virtual array can be configured as a File Server (NAS) which provides ability to create shares for users, departments and applications or as an iSCSI server (SAN) which provides ability to create volumes (LUNs) for mounting on host servers for applications and users.

Data pinning

Shares and volumes can be created as locally-pinned or tiered. Locally-pinned shares and volumes give quick access to data which will not be tiered, for example a small transactional database that requires predictable access to all data. These shares and volumes are backed up to the cloud along with tiered shares and volumes for data protection.

Data tiering

We introduced a new algorithm for calculating the most used data by defining a heat map which tracks the usage of files and blocks at a granular level. This assigns a heat value to the data based on read and write patterns. This heat map is used for tiering of data when the local tiers are full. Data with the lowest heat value (coldest) tiers to the cloud first, while the data with higher heat value is retained in the local tiers of the virtual array. The data on the local tiers is the working set which is accessed frequently by the users. The heat map is backed up with every cloud snapshot to the cloud and in the event of a DR, the heat map will be used for restoring and rehydrating the data from the cloud.

Item level recovery

The virtual array, configured as a file server, provides ability for users to restore their files from recent backups using a self-service model. Every share will have a .backups folder which will contain the most recent backups. The user can navigate to the desired backup and copy the files and folders to restore them. This eliminates calls to administrators for restoring files from backups. The virtual array can restore the entire share or volume from a backup as a new share or a volume on the same virtual appliance.

Backups

 

If you want to try out the preview or get more insights, please click here.

Book Review – Microsoft Azure Storage Essentials

As some of you already know, I sometimes work for Packt on one of their reviewer teams. Now the second book that I was lucky enough to review has been published.

The book is named Microsoft Azure Storage Essentials and was written by Chukri Soueidi. Let me give you a short abstract of what Chukri has written about.

Harness the power of Microsoft Azure services to build efficient cloud solutions

  • Get to grips with the features of Microsoft Azure in terms of Blob, Table, Queue, and File storage
  • Learn the how and when of using the right storage service for different business use cases
  • Make use of Azure storage services in various languages with this fast-paced and easy-to-follow guide

Source: PacktPublishing

If you are interested in Azure Storage, I would highly suggest starting with Chukri's book. 🙂

You can order it via Packt Pub here.