Maybe you have encountered the following issue already.
When deploying the Azure Stack PoC RTM on an HPE server, the deployment stops without any error message at step 60.140.149.
After one week of troubleshooting together with the customer and still no solution, my dear coworker Alexander Ortha gave me a hint about what the issue could be.
He had previously seen the same issue on an HPE server and needed to update the firmware and BIOS of the system. Afterwards the installation went through smoothly.
So I tried it, and he was right. Just try it yourself and update the firmware of your systems to the latest version.
Some of you may have encountered the following issue during the deployment of the Azure Stack RTM PoC.
Let's look at the field configuration:
- One server: HP DL360 G8
- NIC type: 1 GbE Intel i360 (HP OEM label)
- Two public IPv4 addresses published directly to the host, and the host configured as an exposed host in the border gateway firewalls
- No firewall rules for that host on the gateways
- Switch ports for that host configured as trunk/uplink ports with VLAN tagging enabled
- Azure AD used for authentication
In my case, the important points are the port trunk and the VLAN tagging.
Normally VLAN tagging is not an issue, because the deployment toolkit should set the tag automatically during deployment for all required VMs and for the host system.
In my case, and during many test and validation deployments, that didn't happen. After I started the deployment, a new virtual switch was deployed and a virtual NIC named "Deployment" was configured on the host. Afterwards the deployment started. Around three hours later, the deployment stopped at step 60.120.121 because it could not connect to the identity provider.
What's the reason for the failure?
First you should know that the Azure Stack deployment switches between the host and the BGPNAT VM for internet communication. Most traffic runs through the NAT VM, but in this case the host communicates directly with the internet.
So what happened? After creating the "Deployment" NIC on the host, the deployment tool did not set the VLAN tag on that virtual NIC. That breaks network communication for the host; the VMs are not affected, because the VLAN is set correctly on the NAT VM.
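If you want to check whether your host is affected, you can inspect the VLAN configuration of the host virtual NICs yourself. A quick sketch using the standard Hyper-V cmdlets (the vNIC name "Deployment" is the one created by the toolkit):

```powershell
# List the VLAN configuration of all virtual NICs in the management OS.
# On an affected host, the "Deployment" vNIC shows up as "Untagged".
Get-VMNetworkAdapterVlan -ManagementOS

# Or show only the vNIC created by the deployment toolkit:
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Deployment"
```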
What is the Workaround?
- Start the deployment and configure it as normal
- Let the deployment run into the failure
- Open a new PowerShell window with admin permissions (Run as Administrator)
- Type in the following command:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Deployment" -Access -VlanId "VLAN ID"
- Rerun the deployment from the installation folder with:
.\InstallAzureStackPOC.ps1 -rerun
Afterwards the deployment runs smoothly.
Please be aware that after the installation the VLAN ID is removed again, so you need to set it one more time.
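Re-applying the tag after the installation works the same way as during the workaround. A small sketch, where the VLAN ID 10 is only an example and must be replaced with the ID used in your environment:

```powershell
# Re-apply the VLAN tag to the host "Deployment" vNIC after installation.
# Replace 10 with the VLAN ID of your environment.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Deployment" -Access -VlanId 10

# Verify that the setting took effect:
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Deployment"
```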
Another awesome post from Dell Software Development Engineer Paul Marquardt. He published a blog post on how to create Windows 8 / Windows Server 2012 bootable USB media for deployment on UEFI-based systems.
I highly suggest you read the blog. You can find it here.
Configuration and management recommendations and best practices for Exchange 2013 and the PS Series SAN
You can download the PDF file here.