Microsoft Announces the StorSimple Virtual Array Preview

Hi everybody,

yesterday Microsoft announced the public preview of its new StorSimple Virtual Array. For me it is a great fit in Microsoft's cloud and software-defined strategy. The virtual array can run on Hyper-V or VMware ESXi and work as a NAS or iSCSI server to manage up to 64 TB of storage in Azure.

What’s new in the array? (quoted from azure.microsoft.com)

Virtual array form factor

The StorSimple Virtual Array is a virtual machine that can run on Hyper-V (2008 R2 and above) or VMware ESXi (5.5 and above) hypervisors. It provides the ability to configure the virtual array with data disks of different sizes to accommodate the working set of the data managed by the device. A web-based GUI provides a fast and easy way to perform the initial setup of the virtual array.

Multi-protocol

The virtual array can be configured as a File Server (NAS), which provides the ability to create shares for users, departments and applications, or as an iSCSI server (SAN), which provides the ability to create volumes (LUNs) for mounting on host servers for applications and users.

Data pinning

Shares and volumes can be created as locally-pinned or tiered. Locally-pinned shares and volumes give quick access to data which will not be tiered, for example a small transactional database that requires predictable access to all data. These shares and volumes are backed up to the cloud along with tiered shares and volumes for data protection.

Data tiering

We introduced a new algorithm for identifying the most used data by defining a heat map which tracks the usage of files and blocks at a granular level. It assigns a heat value to the data based on read and write patterns. This heat map is used for tiering data when the local tiers are full. Data with the lowest heat value (the coldest) tiers to the cloud first, while data with a higher heat value is retained in the local tiers of the virtual array. The data in the local tiers is the working set which is accessed frequently by the users. The heat map is backed up to the cloud with every cloud snapshot, and in the event of a DR it is used for restoring and rehydrating the data from the cloud.
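
To make the idea concrete, here is a purely illustrative PowerShell sketch of heat-based tiering. It is not StorSimple's actual implementation; the block structure and the read/write weighting are assumptions for the example:

    # Illustrative only: assign a heat value to each block from its I/O pattern,
    # then pick the coldest block to tier to the cloud when local tiers are full.
    $blocks = @(
        [pscustomobject]@{ Id = 1; Reads = 120; Writes = 30 },
        [pscustomobject]@{ Id = 2; Reads = 2;   Writes = 0  },
        [pscustomobject]@{ Id = 3; Reads = 45;  Writes = 80 }
    )

    # Assumed weighting: writes count double (the real algorithm is not public).
    $scored = $blocks | ForEach-Object {
        $_ | Add-Member -NotePropertyName Heat -NotePropertyValue ($_.Reads + 2 * $_.Writes) -PassThru
    }

    # Coldest data (lowest heat) tiers to the cloud first.
    $scored | Sort-Object Heat | Select-Object -First 1 |
        ForEach-Object { "Would tier block $($_.Id) (heat $($_.Heat)) to Azure first" }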

Item level recovery

The virtual array, configured as a file server, provides the ability for users to restore their files from recent backups using a self-service model. Every share has a .backups folder which contains the most recent backups. The user can navigate to the desired backup and copy files and folders to restore them. This eliminates calls to administrators for restoring files from backups. The virtual array can also restore an entire share or volume from a backup as a new share or volume on the same virtual appliance.
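
As a small illustration of the self-service model, restoring a single file is just a copy out of the .backups folder. The share, backup folder and file names below are made up for the example:

    # Hypothetical names: copy a file from the most recent backup back into the share.
    $share  = "\\storsimple-va\finance"                       # placeholder share path
    $backup = Join-Path $share ".backups\2016-01-15-020000"   # placeholder backup folder
    Copy-Item -Path (Join-Path $backup "report.xlsx") -Destination $share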

Backups


If you want to try out the preview or get more insights, please click here.

Virtual Machine Converter – Cannot Convert VMware VM – Disk configuration

Today I had an issue with the Virtual Machine Converter during the migration of a VM from VMware to Hyper-V. The screenshot below shows the issue.

[Screenshot: the Virtual Machine Converter error]

The error indicates a configuration mismatch which stopped the conversion of the VM.

The solution is pretty easy: you need to check and change the virtual machine configuration in VMware vCenter.

First, check the SCSI controller type. It needs to be set to LSI Logic.

Second, and more likely the cause of the issue: one or more of your disks are configured as independent. Just uncheck the box and you're fine. 🙂 A scripted way to fix both settings follows after the screenshot.

[Screenshot: disk settings in VMware vCenter]
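
If you prefer scripting, both settings can also be checked and fixed with VMware PowerCLI. This is only a sketch, assuming an existing vCenter connection and a powered-off VM; the VM name is a placeholder:

    # Assumes the VMware PowerCLI module and an existing vCenter connection:
    # Connect-VIServer -Server vcenter.example.local
    $vm = Get-VM -Name "MyVM"   # placeholder VM name

    # 1. Set the SCSI controller type to LSI Logic (the VM must be powered off).
    Get-ScsiController -VM $vm | Set-ScsiController -Type VirtualLsiLogic

    # 2. Switch any independent disks back to normal (persistent) mode.
    Get-HardDisk -VM $vm |
        Where-Object { $_.Persistence -like "Independent*" } |
        Set-HardDisk -Persistence Persistent -Confirm:$false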


Removed Cluster Node Still Shows Itself as a Cluster Member

Hi guys,

one thing some of you may notice from time to time: when you evict a node from a cluster, it can happen that the node itself still claims to belong to a cluster, and you are not able to force it into a new one or use it as an independent server.

[Screenshot: the evicted node still reporting cluster membership]

The reason for that is quite simple: a cluster node has several settings configured in its AD computer account and in DNS. Sometimes not all attributes are deleted when the node is evicted. Most likely it is the following attribute.

[Screenshot: the AD computer account attribute in question]

So now there are three ways to solve the issue:

1. Remove the Failover Clustering feature from your node, reboot, and reinstall it if needed. That fixes the issue in 80% of all cases (in my personal experience). A PowerShell sketch for this follows after the list.

[Screenshot: removing the Failover Clustering feature]

2. Remove the cluster node from the domain, delete the computer object in Active Directory and rejoin the node. That works in 100% of all cases because you get a completely new computer object and GUID with none of the old entries in it.

[Screenshot: deleting the computer object in Active Directory]

3. Or, for the guys and girls who love some pain: search your AD computer attributes and DNS for all cluster entries that still reference the faulty node and edit them. I wouldn't suggest it because it is very risky and takes a very long time.
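
For option 1, a minimal PowerShell sketch, run locally on the affected node:

    # Remove the Failover Clustering feature, then reboot the node.
    Import-Module ServerManager
    Remove-WindowsFeature Failover-Clustering
    Restart-Computer

    # After the reboot, reinstall the feature if the node should join a new cluster:
    # Add-WindowsFeature Failover-Clustering

    # If the FailoverClusters PowerShell module is still installed, it can also be
    # worth trying to wipe the stale cluster configuration directly:
    # Clear-ClusterNode -Force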


How to Move a Cluster Group & Resource in Microsoft Failover Cluster

Today I want to show you two ways to move the cluster resource group and the witness to another owner within the cluster. There are some scenarios where that can become necessary, e.g. planned maintenance.

Moving the core cluster resources is not as easy as moving a cluster shared volume. There is no “move to best possible node” option for the witness.

[Screenshot: witness options]

[Screenshot: Cluster Shared Volume options]

So there are two ways to move the witness and the core cluster resources.

1. PowerShell

On one of your cluster nodes, run the following command.
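
The core cluster resources (including the witness) live in the group named “Cluster Group”; a typical move looks like this, with “Node2” as a placeholder target:

    # Move the core cluster resource group, including the witness, to a specific node.
    Import-Module FailoverClusters
    Move-ClusterGroup -Name "Cluster Group" -Node "Node2"   # "Node2" is a placeholder

    # Or let the cluster pick the destination node:
    # Move-ClusterGroup -Name "Cluster Group"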


2. Failover Cluster Manager Interface

For that, click on your cluster name and navigate in the action panel on the right to “More Actions”. There you have “Move Core Cluster Resources”.

[Screenshot: “Move Core Cluster Resources” in Failover Cluster Manager]

Differences in Cluster Traffic with Different Workloads

Hi all,

last week I was at a customer site and had the chance to take some screenshots of two clusters which show how much heartbeat traffic a cluster can produce.

1. A two-node cluster with four virtual machines. There the heartbeat produces around 500 Kbit/s on the network. So pretty low.

[Screenshot: heartbeat traffic on the two-node cluster]


2. A six-node cluster with 215 virtual machines. Without any external action the cluster heartbeat produces continuous traffic of around 60 Mbit/s. I started some live migrations and the heartbeat partly jumped up to 150 Mbit/s.

[Screenshot: heartbeat traffic on the six-node cluster]


So my conclusion: the heartbeat traffic grows with the load and the number of nodes within the cluster.