Azure Stack RTM PoC Deployment stops @ Step 60.120.121 – deploy identity provider

Hello Community,

Some of you may have encountered the following issue during the deployment of the Azure Stack RTM PoC.

Let's look at the field configuration:

  1. One server: HP DL360 G8
  2. NIC type: 1GbE Intel i360 (HP OEM label)
  3. Two public IPv4 addresses published directly to the host, and the host configured as an exposed host in the border gateway firewalls
  4. No firewall rules for that host on the gateways
  5. Switchports for that host configured as trunk/uplink ports with VLAN tagging enabled
  6. We use Azure AD for authentication

In my case, the important point is the port trunk and the VLAN tagging.

Normally VLAN tagging is not an issue, because the deployment toolkit should set the tag automatically during deployment for all required VMs and for the host system.

In my case, and during many test and validation deployments, that didn’t happen. After starting the deployment, a new virtual switch is deployed and a virtual NIC named “deployment” is configured for the host. Afterwards the deployment starts. Around 3 hours later, the deployment stops at step 60.120.121 because it cannot connect to the identity provider.

What’s the reason for the failure?

First, you should know that the Azure Stack deployment switches between the host and the BGPNAT VM for internet communication. Most traffic runs through the NAT VM, but in this step the host communicates directly with the internet.

So what happened? After creating the “deployment” NIC for the host, the deployment tool didn’t set the VLAN tag on that virtual NIC. That breaks network communication for the host; for the VMs there is no issue, because the VLAN is set correctly for the NAT VM.

What is the Workaround?

  1. Start the deployment and configure it as usual
  2. Let the deployment run into the failure
  3. Open a new PowerShell with admin permissions (Run as Administrator)
  4. Type in the following command:
  5. Rerun the deployment with

    from the installation folder.

Afterwards the deployment runs smoothly.
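Since the concrete commands are not shown above, here is a hedged sketch of what the VLAN fix could look like. Set-VMNetworkAdapterVlan and Get-VMNetworkAdapterVlan are the standard Hyper-V cmdlets for tagging and inspecting a host vNIC; the VLAN ID 20 is a placeholder for whatever your trunk ports expect.

```powershell
# Sketch only: VLAN ID 20 is a placeholder for your environment.
# Tag the host's "deployment" vNIC that the toolkit created:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "deployment" `
    -Access -VlanId 20

# Verify the tag was applied before rerunning the deployment:
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "deployment"
```

The same two cmdlets also cover the note below: after the installation finishes and removes the tag again, you can re-apply it with the identical Set-VMNetworkAdapterVlan call.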


Please be aware that after the installation the VLAN ID is removed again, so you need to set it one more time.

How to fix the same SMBIOS ID on different hosts

Today, a post about something I see from time to time in the field.

Today I want to show you how to fix the issue of servers and clients shipping with the same SMBIOS ID. Normally that would not be an issue, but as soon as you try to manage them with System Center Virtual Machine Manager or Configuration Manager, it becomes one. Both tools use the SMBIOS ID as a primary key in their databases to identify the system.
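To check whether two machines actually report the same SMBIOS ID, you can read the system UUID from WMI; Win32_ComputerSystemProduct is the standard class that exposes it:

```powershell
# Read the SMBIOS system UUID that VMM/ConfigMgr key their records on.
# Two hosts printing the same UUID here will collide in those databases.
(Get-CimInstance -ClassName Win32_ComputerSystemProduct).UUID
```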



Currently I know only the following trick to fix the issue. It would be extremely annoying on many clients or servers, but it actually works.

First you need two tools.

1: Rufus – to create a bootable USB stick

2: AMIDMI – with that tool you can overwrite the SMBIOS ID

Now create the boot stick with Rufus and copy the AMIDMI file onto the stick.

Reboot your system from the stick.

Navigate to the folder with your AMIDMI file and run the command amidmi /u

Afterwards you can reboot the system and start Windows again.


When you are working with Virtual Machine Manager, you need to remove the host from your management console and add it again. After the host is discovered again, you can see the new SMBIOS ID.



I have seen these issues with the following motherboard vendors:

  1. ASRock (Client & Rack)
  2. ASUS (Client)
  3. SuperMicro (Server & ARM)
  4. Fujitsu (Server)

FIX: Slow performance for Hyper-V VMs when using VMQ with Broadcom NICs

Regarding the issue I mentioned in my post about performance problems of Hyper-V VMs when VMQ is enabled, Broadcom has released a driver fix for Windows Server 2012.

This fix should solve the issue.
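Until you can install the updated driver, you can check which adapters have VMQ enabled and, as an interim measure, turn it off with the built-in NetAdapter cmdlets (the adapter name "Ethernet 1" is a placeholder):

```powershell
# Show the VMQ state of all physical adapters:
Get-NetAdapterVmq

# Interim workaround until the fixed driver is installed:
# disable VMQ on the affected Broadcom adapter (name is a placeholder).
Disable-NetAdapterVmq -Name "Ethernet 1"

# After installing the fixed driver, re-enable VMQ:
Enable-NetAdapterVmq -Name "Ethernet 1"
```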

You can download the new firmware from Broadcom or from the download pages of your vendor.


Here are the Dell download links:

Driver download

Before you install the new driver, please uninstall the old one. 

Firmware download


Fixes and Enhancements Broadcom driver vers., (Source:

– Display FCoE statistics in management
– 57800_1Gig ports displays FCoE boot version in HII
– 5720 DriverHealthProtocol status
– R720 – Broadcom 5719 PCI MAC address set to null 00-00-00-00-00-00 when using LAG in PCI slots 1-4
– NIC.FrmwImgMenu.1 is not displaying Controller BIOS Object.
– Broadcom 10 Gigabit Ethernet Driver fails EFI_DEVICE_ERROR with drvdiag
– Add ability to change TCP Delayed ACK setting on Broadcom 57711
– Add support for OOB WOL using virtual mac address
– Add support for BACS to Allow Simultaneous Multiple vPort Creation
– Broadcom 5720 NDC fail to support the VLAN tagging in UEFI
– Broadcom 57810S-k iSoE ESXi performance issue during large block Seq IO
– Remove additional MAC address displaying in device configuration menu
– Change BACS display string of ‘iSCSI’ to ‘iSCSI HBA’ in DCB menu
– Update BACs DCB description field to add details on where to enable DCB
– Added Nic Partitioning feature to new 57712 Ethernet controller chip.
– The drivers for NetXtreme II 1 Gb, NetXtreme II 10 Gb, and iSCSI offload are combined in the KMP RPM packages provided for SUSE Linux Enterprise Server 11 SP1.
– The Broadcom driver and management apps installer now provides the ability to select whether to enable the TCP Offload Engine (TOE) in Windows Server 2008 R2 when only the NetXtreme II 1 Gb device is present.
– Added Hyper-V Live Migration features with BASP teaming
– Added new Comprehensive Configuration Management to manage all MBA enabled adapters from a single banner popup.
– TCP/IP and iSCSI Offload performance improvement in a congested network.
– Added VMQ support for NetXtremeII 1G and 10G devices.
– EOL Windows 2003 support
– Added SR-IOV features support for 57712 and 578xx
– Added EEE support for 1G and 10G Base-T adapters
– Added FCoE support for 57800/57810 adapters
– No FCoE support for 57800/57810 10GBase-T adapters
– Added support for 57840 adapters


Fixes and Enhancements Broadcom Firmware vers. 7.6.15 (Source:

– Add support for the 57840 adapters
– Reduce firmware update time in Windows