Azure Stack Technical Preview (POC): Hardware requirements – published

Hi everybody,

even though the release date has probably moved to Q4/2016 or sometime in 2017, Microsoft keeps publishing more and more information about its new Azure Stack.

Yesterday they published the hardware requirements for Azure Stack, which you can find in the original blog post here.


Source: http://blogs.technet.com/b/server-cloud/archive/2015/12/21/microsoft-azure-stack-hardware-requirements.aspx

Hardware requirements for Azure Stack Technical Preview (POC)

Note that these requirements only apply to the upcoming POC release; they may change for future releases.

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| Compute: CPU | Dual-Socket: 12 Physical Cores | Dual-Socket: 16 Physical Cores |
| Compute: Memory | 96 GB RAM | 128 GB RAM |
| Compute: BIOS | Hyper-V Enabled (with SLAT support) | Hyper-V Enabled (with SLAT support) |
| Network: NIC | Windows Server 2012 R2 Certification required for NIC; no specialized features required | Windows Server 2012 R2 Certification required for NIC; no specialized features required |
| Disk drives: Operating System | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD) | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD) |
| Disk drives: General Azure Stack POC Data | 4 disks, each with a minimum of 140 GB of capacity (SSD or HDD) | 4 disks, each with a minimum of 250 GB of capacity |
| HW logo certification | Certified for Windows Server 2012 R2 | Certified for Windows Server 2012 R2 |
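If you want to quickly check whether a host meets the compute requirements before grabbing the bits, a minimal PowerShell sketch like the one below (run locally on the host, using the standard Win32 CIM classes) is enough; the thresholds are simply the values from the table above.

    # Physical cores across all sockets (minimum 12, recommended 16)
    $cores = (Get-CimInstance Win32_Processor | Measure-Object -Property NumberOfCores -Sum).Sum

    # Total physical memory in GB (minimum 96 GB, recommended 128 GB)
    $memGB = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB)

    # SLAT support, required for Hyper-V (reported per CPU, so the first one is enough)
    $slat = (Get-CimInstance Win32_Processor | Select-Object -First 1).SecondLevelAddressTranslationExtensions

    "Cores: $cores | Memory: $memGB GB | SLAT: $slat"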

Storage considerations

Data disk drive configuration: All data drives must be of the same type (SAS or SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (MPIO/multi-path support is not provided).
HBA configuration options:
  1. (Preferred) Simple HBA
  2. RAID HBA – adapter must be configured in "pass-through" mode
  3. RAID HBA – disks should be configured as Single-Disk, RAID-0
Supported bus and media type combinations

  • SATA HDD
  • SAS HDD
  • RAID HDD
  • RAID SSD (if the media type is unspecified/unknown*)
  • SATA SSD + SATA HDD**
  • SAS SSD + SAS HDD**

* RAID controllers without pass-through capability can’t recognize the media type. Such controllers will mark both HDD and SSD as Unspecified. In that case, the SSD will be used as persistent storage instead of caching devices. Therefore, you can deploy the Microsoft Azure Stack POC on those SSDs.

** For tiered storage, you must have at least 3 HDDs.

Example HBAs: LSI 9207-8i, LSI-9300-8i, or LSI-9265-8i in pass-through mode
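To see which bus and media type combination your data drives actually report (and whether a RAID HBA is hiding the media type as Unspecified), a quick read-only look with the in-box storage cmdlets on the host is usually enough:

    # List all physical disks with the properties relevant for the POC storage rules
    Get-PhysicalDisk |
        Select-Object FriendlyName, BusType, MediaType, CanPool,
                      @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
        Sort-Object BusType, MediaType |
        Format-Table -AutoSize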

 

While the above configuration is generic enough that many servers should fit the description, we recommend a couple of SKUs: the Dell R630 and the HPE DL360 Gen9. Both of these SKUs have been in-market for some time.

Dell Blog – Remotely Configure System Settings for Guarded Fabric Host Deployment in Windows Server 2016

Hi Everybody,

Dell published a blog post on its TechCenter explaining how to remotely configure a Guarded Fabric on its Generation 13 servers.

Check it out here.  🙂

Microsoft Announcing the StorSimple Virtual Array Preview

Hi everybody,

yesterday Microsoft announced the public preview of its new StorSimple Virtual Array. For me it is a great fit in Microsoft's cloud and software-defined strategy. The virtual array can run under Hyper-V or VMware ESXi and work as a NAS or iSCSI server to manage up to 64 TB of storage in Azure.

What’s new in the array? (quote from azure.microsoft.com)

Virtual array form factor

The StorSimple Virtual Array is a virtual machine which can be run on Hyper-V (2008 R2 and above) or VMware ESXi (5.5 and above) hypervisors. It provides the ability to configure the virtual array with data disks of different sizes to accommodate the working set of the data managed by the device. A web-based GUI provides a fast and easy way to perform the initial setup of the virtual array.
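Just to illustrate the Hyper-V side of that: creating the VM for the virtual array is plain in-box PowerShell. VM name, paths, switch name and the sizing below are placeholders of mine, not official minimums, so check the StorSimple documentation before you build one.

    # Create the virtual array VM from the downloaded image (names, paths and sizes are examples only)
    New-VM -Name "StorSimpleVA" -Generation 1 -MemoryStartupBytes 8GB `
           -VHDPath "C:\StorSimple\StorSimpleVirtualArray.vhdx" -SwitchName "External"

    # Add a data disk sized to the working set you want the appliance to hold locally
    New-VHD -Path "C:\StorSimple\StorSimpleVA-Data.vhdx" -SizeBytes 500GB -Dynamic
    Add-VMHardDiskDrive -VMName "StorSimpleVA" -Path "C:\StorSimple\StorSimpleVA-Data.vhdx"

    Start-VM -Name "StorSimpleVA"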

Multi-protocol

The virtual array can be configured as a file server (NAS), which provides the ability to create shares for users, departments and applications, or as an iSCSI server (SAN), which provides the ability to create volumes (LUNs) for mounting on host servers for applications and users.
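On the iSCSI side, mounting a volume from the virtual array on a Windows host works with the standard iSCSI initiator cmdlets; the portal address below is just a placeholder for the appliance's IP.

    # Register the virtual array as a target portal (IP address is a placeholder)
    New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"

    # Connect to the discovered target and keep the session persistent across reboots
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # The volume then shows up as a new disk that can be initialized and formatted
    Get-Disk | Where-Object BusType -eq 'iSCSI'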

Data pinning

Shares and volumes can be created as locally-pinned or tiered. Locally-pinned shares and volumes give quick access to data which will not be tiered, for example a small transactional database that requires predictable access to all data. These shares and volumes are backed up to the cloud along with tiered shares and volumes for data protection.

Data tiering

We introduced a new algorithm for calculating the most used data by defining a heat map which tracks the usage of files and blocks at a granular level. This assigns a heat value to the data based on read and write patterns. This heat map is used for tiering of data when the local tiers are full. Data with the lowest heat value (coldest) tiers to the cloud first, while data with a higher heat value is retained in the local tiers of the virtual array. The data on the local tiers is the working set which is accessed frequently by the users. The heat map is backed up to the cloud with every cloud snapshot and, in the event of a DR, is used for restoring and rehydrating the data from the cloud.

Item level recovery

The virtual array, configured as a file server, provides the ability for users to restore their files from recent backups using a self-service model. Every share has a .backups folder which contains the most recent backups. The user can navigate to the desired backup and copy the files and folders to restore them. This eliminates calls to administrators for restoring files from backups. The virtual array can also restore an entire share or volume from a backup as a new share or volume on the same virtual appliance.
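Because the backups are exposed as a normal folder inside each share, a self-service restore is really just a copy; the share and backup folder names below are made up for illustration.

    # Browse the most recent backups of a share (share and folder names are examples only)
    Get-ChildItem "\\storsimple-va\projects\.backups"

    # Restore a folder from a chosen backup back into the live share
    Copy-Item "\\storsimple-va\projects\.backups\2015-12-20-020000\reports" `
              -Destination "\\storsimple-va\projects\reports-restored" -Recurse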

Backups

 

If you want to try out the preview or get more insights, please click here.

Save the Date: WEBINAR | Scripting & Automation in Hyper-V without SCVMM with @ThomasMaurer

Hi everyone,

you should save the 10th of December 2015 in your calendar. My friend Thomas Maurer is presenting some cool scripting stuff for Hyper-V together with Altaro. 🙂

Click here to register for the event.

System Center Virtual Machine Manager (SCVMM) provides some great automation benefits for those organizations that can afford the hefty price tag. However, if SCVMM isn’t a cost effective solution for your business, what are you to do? While VMM certainly makes automation much easier, you can achieve a good level of automation with PowerShell and the applicable PowerShell modules for Hyper-V, clustering, storage, and more.
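As a small taste of what that looks like in practice, here is a sketch that only uses the in-box Hyper-V module, no SCVMM anywhere; the memory sizes and checkpoint name are just illustrative values.

    # Enable dynamic memory on every VM that is currently off (the setting can only be changed while a VM is off)
    Get-VM | Where-Object State -eq 'Off' | ForEach-Object {
        Set-VMMemory -VMName $_.Name -DynamicMemoryEnabled $true `
                     -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
    }

    # Or create a checkpoint of every running VM before a maintenance window
    Get-VM | Where-Object State -eq 'Running' | Checkpoint-VM -SnapshotName "Pre-Maintenance"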

Are you looking to get to grips with automation and scripting?

Join Thomas Maurer, Microsoft Datacenter and Cloud Management MVP, who will use this webinar to show you how to achieve automation in your Hyper-V environments, even if you don’t have SCVMM.

Remember, any task you have to do more than once should be automated. Bring some sanity to your virtual environment by adding some scripting and automation know-how to your toolbox.

How to fix same SMBIOS ID on different Hosts

Today, a quick post about something I see from time to time in the field.

I want to show you how to fix the issue when you get servers and clients with the same SMBIOS ID. Normally that wouldn't be a problem, but as soon as you try to manage them with System Center Virtual Machine Manager or Configuration Manager it becomes one. Both tools use the SMBIOS ID as a primary key in their databases to identify the system.
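You can spot the duplicates for yourself before VMM or ConfigMgr complains: the SMBIOS ID both tools key on is the UUID of Win32_ComputerSystemProduct, which you can collect remotely like this (the host names are placeholders):

    # Read the SMBIOS UUID from several hosts and look for duplicates (host names are examples)
    $hosts = "host01", "host02", "host03"

    Get-CimInstance -ClassName Win32_ComputerSystemProduct -ComputerName $hosts |
        Select-Object PSComputerName, UUID |
        Sort-Object UUID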


Currently, the only trick I know to fix the issue is the following one. It would be extremely annoying to do on many clients or servers, but it actually works.

First you need two tools.

1: Rufus – To create a bootable USB Stick

2: AMIDMI – With that tool you can overwrite the SMBIOS ID

Now create the boot stick with Rufus and copy the AMIDMI file onto the stick.

Reboot your system from the stick.

Navigate to the folder with your AMIDMI file and run the command amidmi /u

Afterwards you can reboot the system and start Windows again.

 

When you are working with Virtual Machine Manager, you need to remove the host from your management console and add it again. After the host is discovered again, you can see the new SMBIOS ID.
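If you prefer to script that step, the VMM cmdlets do the same thing as the console; the host name, host group and Run As account below are placeholders for your environment.

    # Remove the host from VMM and add it again so the new SMBIOS ID is discovered
    Import-Module VirtualMachineManager

    Get-SCVMHost -ComputerName "hv-host01" | Remove-SCVMHost

    $runAs = Get-SCRunAsAccount -Name "HostManagement"
    Add-SCVMHost -ComputerName "hv-host01" -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -Credential $runAs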


So far I have seen these issues with the following motherboard vendors:

  1. ASRock (Client & Rack)
  2. ASUS (Client)
  3. SuperMicro (Server & ARM)
  4. Fujitsu (Server)