Azure Stack PoC deployment stops at step 60.140.149 on HP Server

Hi everyone,

maybe you have encountered the following issue already.

When deploying the Azure Stack PoC RTM on an HPE server, the deployment stops at step 60.140.149 without any error message.

After a week of troubleshooting together with the customer and no solution, my dear coworker Alexander Ortha gave me a hint about what the issue could be.

He had previously seen the same issue on an HPE server and needed to update the firmware and BIOS of the system. Afterwards, the installation went through smoothly.

So I tried it, and he was right. Just try it yourself and update the firmware and BIOS of your systems to the latest version before you deploy.
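If you want to check which BIOS level you are currently running before updating, the Win32_BIOS CIM class is a quick way (a generic sketch, not part of the Azure Stack toolkit):

    # Show BIOS vendor, version and release date of the local host
    Get-CimInstance -ClassName Win32_BIOS |
        Select-Object Manufacturer, SMBIOSBIOSVersion, ReleaseDate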

Cheers,

Flo

Azure Stack RTM PoC Deployment stops @ Step 60.120.121 – deploy identity provider

Hello Community,

some of you may have encountered the following issue during the deployment of the Azure Stack RTM PoC.

Let's look at the field configuration:

  1. One HP DL360 G8 server
  2. NIC: 1 GbE Intel i360 (HP OEM label)
  3. Two public IPv4 addresses published directly to the host, with the host configured as an exposed host on the border gateway firewalls
  4. No firewall rules for that host on the gateways
  5. Switch ports for that host configured as trunk/uplink ports with VLAN tagging enabled
  6. Azure AD used for authentication

In my case, the important points are the port trunk and the VLAN tagging.

Normally, VLAN tagging is not an issue because the deployment toolkit should set the tag automatically during deployment for all required VMs and for the host system.

In my case, and during many test and validation deployments, that didn't happen. After you start the deployment, a new virtual switch is created and a virtual NIC named “deployment” is configured for the host. Afterwards, the actual deployment begins. Around three hours later, it stops at step 60.120.121 because it cannot connect to the identity provider.

What's the reason for the failure?

First, you should know that the Azure Stack deployment switches between the host and the BGPNAT VM for internet communication. Most of the traffic runs through the NAT VM, but at this step the host communicates directly with the internet.

So what happened? After creating the “deployment” NIC for the host, the deployment tool didn't set the VLAN tag on that virtual NIC. That breaks network communication for the host; the VMs are not affected because the VLAN is set correctly for the NAT VM.
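You can verify this on your own host with the Hyper-V PowerShell module, which shows the VLAN configuration of every virtual NIC in the management OS; a missing tag shows up as operation mode “Untagged” (a diagnostic sketch, nothing Azure Stack specific):

    # List the VLAN settings of all host (management OS) virtual NICs;
    # the "deployment" vNIC should be in Access mode with your VLAN ID
    Get-VMNetworkAdapterVlan -ManagementOS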

What is the workaround?

  1. Start the deployment and configure it as usual
  2. Let the deployment run into the failure
  3. Open a new PowerShell session with admin permissions (Run as Administrator)
  4. Set the VLAN ID on the host's “deployment” vNIC (see the sketch after this list)
  5. Rerun the deployment from the installation folder (also shown in the sketch below)
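Here is a minimal sketch of what steps 4 and 5 can look like, assuming the host vNIC is the one named “deployment” that the toolkit creates; the VLAN ID (123), the installation folder path, and the -Rerun switch of the PoC installer script are placeholders and assumptions to adapt to your environment:

    # Step 4: tag the host's "deployment" vNIC with the VLAN ID of your trunk port
    # (123 is a placeholder - use your own VLAN ID)
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "deployment" -Access -VlanId 123

    # Step 5: rerun the deployment from the installation folder
    # (C:\AzureStack is a hypothetical path; -Rerun resumes the failed run)
    Set-Location C:\AzureStack
    .\InstallAzureStackPOC.ps1 -Rerun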

Afterwards the deployment runs smoothly.


Please be aware that after the installation the VLAN ID is removed again, so you need to set it one more time.

Azure Stack Technical Preview (POC): Hardware requirements – published

Hi everybody,

even with the release date probably moved to Q4/2016 or sometime in 2017, Microsoft publishes more and more information about its new Azure Stack.

Yesterday they published the hardware requirements for Azure Stack, which you can find in the original blog post (source below).


Source: http://blogs.technet.com/b/server-cloud/archive/2015/12/21/microsoft-azure-stack-hardware-requirements.aspx

Hardware requirements for Azure Stack Technical Preview (POC)

Note that these requirements only apply to the upcoming POC release; they may change for future releases.

| Component | Minimum | Recommended |
| --- | --- | --- |
| Compute: CPU | Dual-socket: 12 physical cores | Dual-socket: 16 physical cores |
| Compute: Memory | 96 GB RAM | 128 GB RAM |
| Compute: BIOS | Hyper-V enabled (with SLAT support) | Hyper-V enabled (with SLAT support) |
| Network: NIC | Windows Server 2012 R2 certification required for NIC; no specialized features required | Windows Server 2012 R2 certification required for NIC; no specialized features required |
| Disk drives: Operating System | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD) | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD) |
| Disk drives: General Azure Stack POC Data | 4 disks, each with a minimum of 140 GB of capacity (SSD or HDD) | 4 disks, each with a minimum of 250 GB of capacity |
| HW logo certification | Certified for Windows Server 2012 R2 | Certified for Windows Server 2012 R2 |
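If you want to quickly check whether a host meets the compute minimums from the table above, here is a generic sketch using CIM (my own snippet, not part of any Microsoft tooling):

    # Physical cores across all sockets (minimum: 12, recommended: 16)
    (Get-CimInstance -ClassName Win32_Processor |
        Measure-Object -Property NumberOfCores -Sum).Sum

    # Installed RAM in GB (minimum: 96, recommended: 128)
    [math]::Round((Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory / 1GB)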

Storage considerations

Data disk drive configuration: all data drives must be of the same type (SAS or SATA) and capacity. If SAS disk drives are used, they must be attached via a single path (no MPIO; multi-path support is not provided).
HBA configuration options:

  1. (Preferred) Simple HBA
  2. RAID HBA – adapter must be configured in “pass-through” mode
  3. RAID HBA – disks should be configured as single-disk, RAID-0
Supported bus and media type combinations

  • SATA HDD
  • SAS HDD
  • RAID HDD
  • RAID SSD (if the media type is unspecified/unknown*)
  • SATA SSD + SATA HDD**
  • SAS SSD + SAS HDD**

* RAID controllers without pass-through capability can't recognize the media type. Such controllers will mark both HDDs and SSDs as Unspecified. In that case, the SSDs will be used as persistent storage instead of as caching devices. Therefore, you can deploy the Microsoft Azure Stack POC on those SSDs (a quick way to check what the OS reports is sketched after these notes).

** For tiered storage, you must have at least 3 HDDs.
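To see which bus and media types Windows actually reports for your drives, and whether a RAID controller is masking them as Unspecified, Get-PhysicalDisk from the built-in Storage module works on Windows Server 2012 R2 and later (a diagnostic sketch):

    # Show what the OS sees for each physical disk
    Get-PhysicalDisk |
        Select-Object FriendlyName, BusType, MediaType,
            @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
        Sort-Object FriendlyName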

Example HBAs: LSI 9207-8i, LSI-9300-8i, or LSI-9265-8i in pass-through mode


While the above configuration is generic enough that many servers should fit the description, we recommend a couple of SKUs: the Dell R630 and the HPE DL360 Gen9. Both of these SKUs have been in the market for some time.