@AltaroSoftware New eBook: Supercharging Hyper-V Performance

Hi everybody,

Altaro published a great new eBook “Supercharging Hyper-V Performance”.

Here’s what you can expect from this eBook:

  1. A practical guide to finding and fixing issues in storage, CPU, memory, and network components
  2. Instructions on how to use Windows Performance Monitor and PAL to monitor your virtualized environment
  3. Advice on how to plan hosts, VMs, storage, networking and management for maximum performance.

You can download it here.

Synology configuration for max. IOPS with Hyper-V

Hello everybody,

As most of you know, I'm using a Synology DS1813+ as storage for my test lab. Last week I took some time to change my configuration to improve the IOPS for my Hyper-V systems.

My old configuration with RAID 10 and block-based iSCSI LUNs delivered only about 6,000 IOPS, so I needed to improve it to get more VMs running.

 

Let us start with the hardware configuration. I'm using a DS1813+ with 8 disks, split as follows.

Disk type | Count | Capacity | HD vendor & type | Disk role
SSD | 2 | 120 GB | Kingston SV300S37A120G | Cache
HDD SATA 7,200 RPM | 6 | 1 TB | 2x Seagate ST1000DM003-1ER162, 3x Seagate ST1000DM003-1CH162, 1x Seagate ST31000524AS | Storage

I connected three 1 Gbit/s ports with MPIO to my SAN network, so there should be enough bandwidth.
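For reference, connecting the Synology target from the Hyper-V host with MPIO could look roughly like this in PowerShell; the portal IP and the IQN filter are placeholders, not my actual lab values:

```powershell
# Install the MPIO feature and let MSDSM claim iSCSI devices (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the Synology iSCSI portal (placeholder IP) and connect the target
# with MPIO enabled; repeat the connection once per path so all three 1 Gbit/s links are used
New-IscsiTargetPortal -TargetPortalAddress "192.168.100.10"
Get-IscsiTarget | Where-Object NodeAddress -like "*synology*" |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```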

 

My disks are configured as follows:

The SSDs are configured as a cache volume for our later Hyper-V VM volume. When you configure the SSD cache, choose "Read-Write".

SY-PI-0000

 

Now back to the hard drives. You should configure RAID 10 for the disks. You can use RAID 5 too if you need the capacity, but you will lose at least a third of the final performance (see the rough estimate below the screenshot).

SY-PI-0001
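To put a rough number on that claim: a quick back-of-envelope estimate, assuming about 100 random IOPS per 7,200 RPM disk and the usual write penalties (2 for RAID 10, 4 for RAID 5), shows how much random-write performance the parity costs before any caching:

```powershell
# Back-of-envelope random-write estimate for 6 disks at ~100 IOPS each (assumed values)
$disks       = 6
$iopsPerDisk = 100

$raid10Write = ($disks * $iopsPerDisk) / 2   # RAID 10 write penalty: 2
$raid5Write  = ($disks * $iopsPerDisk) / 4   # RAID 5 write penalty: 4

"RAID 10: ~$raid10Write write IOPS, RAID 5: ~$raid5Write write IOPS"
```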


Now you need to create the volume for your VMs. This is where I lose some performance, because I need to create multiple volumes on one RAID. For more performance you should choose a single volume on RAID, but then you can no longer use the disks for additional volumes.

SY-PI-0002

 

The volume must be configured with a LUN allocation unit size of 4 KB for Windows ODX.

SY-PI-0003
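If you want to double-check on the Windows side that ODX is actually enabled before relying on it, the registry value below controls it (0 means ODX is enabled); this is only a quick check, not part of the Synology setup:

```powershell
# ODX is controlled by FilterSupportedFeaturesMode: 0 = enabled, 1 = disabled
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode"
```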

 

Now you create the iSCSI LUN and target. Here you choose regular files and advanced LUN features. If you do not use thin provisioning, you will gain a few more IOPS, but that does not matter much.

SY-PI-0004

SY-PI-0005

 

That's all you need to do on the storage side. Next, you need to format the drive. Here I would suggest formatting the drive with NTFS and a 4 KB allocation unit size (a PowerShell sketch follows the screenshot).

SY-PI-0006
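If you prefer PowerShell over the Disk Management GUI, bringing the new iSCSI disk into service could look roughly like this; the disk number and the volume label are placeholders for my lab values:

```powershell
# Initialize the new iSCSI disk and format it with NTFS and a 4 KB allocation unit size
# Disk number 2 and the label "HyperV-VMs" are placeholders
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 4096 -NewFileSystemLabel "HyperV-VMs"
```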


As a result, I got a maximum of nearly 12,000 IOPS out of my Synology DS1813+ 🙂 Not bad for a SOHO disk array 😉

Capture


White paper – Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers

A few days ago, Microsoft published a whitepaper on how to build high-performance storage for Hyper-V clusters on Scale-Out File Servers using Violin Windows Flash Arrays.

To take a look, please click here.

Testing a Synology DS1813+ Part# 4 – Performance testing

Last week it was time to send back my toy, the Synology RS2414+, but I still have my DS1813+. So I keep on testing.

The DS1813+ currently runs as my central iSCSI target and holds one SMB 2.0 share.

iSCSI LUNs

SMB Volume

All LUNs and volumes are distributed over a disk group with four 1 TB hard disks at 5,400/7,200 RPM (estimated 98/102 IOPS each).

Disk Group

Additionally, I configured SSD caching for iSCSI VM01 to increase performance. SSD cache directly impacts the memory usage of a Synology system, so I'm currently limited to 124 GB of SSD cache because I only have 2 GB of memory.

SSD cache

Now to my testing scenario. I had a basic load on my DS with six running VMs, whose VM files and VHDs are placed on the DS.

VMs

You can see the basic load on my DS and Hyper-V host in the screenshots.

Load Hyper-V Host

Load DS1813+

Now I started two storage live migrations of running VMs. Together they consume around 30 GB of storage. The data is transferred from my local Hyper-V host with a tiered storage space via a 1 Gbit/s SAN network (converged network over three 1 Gbit/s NICs with 1 Gbit/s bandwidth guaranteed). The DS has three 1 Gbit/s NICs connected as a bond to my SAN network.
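The storage live migrations themselves are simply started per VM; in PowerShell that is roughly one Move-VMStorage call each (the VM name and destination path below are placeholders):

```powershell
# Move a running VM's virtual disks and configuration to the iSCSI volume on the DS
# "VM01" and the destination path are placeholders for my lab values
Move-VMStorage -VMName "VM01" -DestinationStoragePath "D:\Hyper-V\VM01"
```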

In the screenshots you can see how the resource consumption increases.

Hyper-V Load during test.

DS load during test.
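If you want to capture the same kind of host-side load data without taking screenshots, the Performance Monitor counters can also be sampled from PowerShell; a minimal sketch:

```powershell
# Sample CPU, disk and network load on the Hyper-V host every 5 seconds (10 samples)
Get-Counter -Counter @(
    "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
    "\PhysicalDisk(_Total)\Disk Transfers/sec",
    "\Network Interface(*)\Bytes Total/sec"
) -SampleInterval 5 -MaxSamples 10
```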

Conclusion: The DS1813+ is not a full enterprise system, but for small/medium environments and testing purposes the performance is more than great. I really enjoy that toy.

FIX: Slow performance for Hyper-V VMs when using VMQ with Broadcom NICs

Regarding the issue I mentioned in my post about performance problems with Hyper-V VMs when VMQ is enabled, Broadcom has released a driver fix for Windows Server 2012.

This fix should solve the issue.
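Until the new driver is installed, the usual workaround is to turn VMQ off on the affected Broadcom adapters; a quick sketch (the name filter is only an example, adjust it to your adapters):

```powershell
# Show the current VMQ state and disable VMQ on the affected Broadcom NICs
# The filter "*Broadcom*" is an example; adjust it to your adapter descriptions
Get-NetAdapterVmq
Get-NetAdapter | Where-Object InterfaceDescription -like "*Broadcom*" |
    Disable-NetAdapterVmq
```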

You can download the new firmware from Broadcom or from your vendor's download pages.

 

Here are the Dell download links:

Driver download

Before you install the new driver, please uninstall the old one. 

Firmware download

 

Fixes and enhancements in Broadcom driver versions 17.6.0.0 / 17.6.0.12 (source: support.dell.com):

Fixes:
===============
– Display FCoE statistics in management
– 57800_1Gig ports displays FCoE boot version in HII
– 5720 DriverHealthProtocol status
– R720 – Broadcom 5719 PCI MAC address set to null 00-00-00-00-00-00 when using LAG in PCI slots 1-4
– NIC.FrmwImgMenu.1 is not displaying Controller BIOS Object.
– Broadcom 10 Gigabit Ethernet Driver fails EFI_DEVICE_ERROR with drvdiag
– Add ability to change TCP Delayed ACK setting on Broadcom 57711
– Add support for OOB WOL using virtual mac address
– Add support for BACS to Allow Simultaneous Multiple vPort Creation
– Broadcom 5720 NDC fail to support the VLAN tagging in UEFI
– Broadcom 57810S-k iSoE ESXi performance issue during large block Seq IO
– Remove additional MAC address displaying in device configuration menu
– Change BACS display string of ‘iSCSI’ to ‘iSCSI HBA’ in DCB menu
– Update BACs DCB description field to add details on where to enable DCB
 
Enhancements:
===============
– Added Nic Partitioning feature to new 57712 Ethernet controller chip.
– The drivers for NetXtreme II 1 Gb, NetXtreme II 10 Gb, and iSCSI offload are combined in the KMP RPM packages provided for SUSE Linux Enterprise Server 11 SP1.
– The Broadcom driver and management apps installer now provides the ability to select whether to enable the TCP Offload Engine (TOE) in Windows Server 2008 R2 when only the NetXtreme II 1 Gb device is present.
– Added Hyper-V Live Migration features with BASP teaming
– Added new Comprehensive Configuration Management to manage all MBA enabled adapters from a single banner popup.
– TCP/IP and iSCSI Offload performance improvement in a congested network.
– Added VMQ support for NetXtremeII 1G and 10G devices.
– EOL Windows 2003 support
– Added SR-IOV features support for 57712 and 578xx
– Added EEE support for 1G and 10G Base-T adapters
– Added FCoE support for 57800/57810 adapters
– No FCoE support for 57800/57810 10GBase-T adapters
– Added support for 57840 adapters
 

 

Fixes and enhancements in Broadcom firmware version 7.6.15 (source: support.dell.com):

– Add support for the 57840 adapters
– Reduce firmware update time in Windows