Introduction to Azure Bastion

Hi everyone,

You may have heard about Azure Bastion recently. With Azure Bastion you can open an HTTPS session directly via the Azure Portal and RDP/SSH into an Azure VM without using a public IP on the VM. So there is no need for a public IP on the VM or a VPN into the VNet.

Basically, Azure Bastion is a jump server or bastion host as a service within an Azure network.

The following videos give you a short introduction to Azure Bastion.

If you want to enable Azure Bastion in your subscription, you will find a great resource in the Azure documentation at the link below.

https://docs.microsoft.com/en-us/azure/bastion/bastion-create-host-portal
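
For reference, here is a minimal PowerShell sketch of what enabling Bastion for a VNet roughly looks like. The resource names, location and address prefix are placeholders, and it assumes the Az module with Bastion support (New-AzBastion) is available; the VNet needs a dedicated subnet named AzureBastionSubnet and a Standard SKU public IP.

    # Add the required AzureBastionSubnet to an existing VNet (names are examples)
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
    Add-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" -AddressPrefix "10.0.250.0/27" -VirtualNetwork $vnet
    $vnet | Set-AzVirtualNetwork

    # Standard SKU public IP for the Bastion host
    $pip = New-AzPublicIpAddress -ResourceGroupName "rg-demo" -Name "pip-bastion" `
        -Location "westeurope" -AllocationMethod Static -Sku Standard

    # Create the Bastion host in that VNet
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
    New-AzBastion -ResourceGroupName "rg-demo" -Name "bastion-demo" -PublicIpAddress $pip -VirtualNetwork $vnet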

Currently Bastion has a very limited feature set and provides the service only per VNet. Further down the roadmap, Microsoft will add more features like Multi-Factor Authentication and Azure AD support, as well as support for VNet peering.

My new article with @AltaroSoftware – How to Boost Network Performance Inside China’s Great Firewall

Together with Altaro, I wrote a new article about how to improve performance for users inside China who use services and cloud services hosted outside of China.

I hope you enjoy reading.

Azure Stack RTM PoC Deployment stops @ Step 60.120.121 – deploy identity provider

Hello Community,

Some of you may have encountered the following issue during the deployment of the Azure Stack RTM PoC.

Let's look at the field configuration:

  1. One server HP DL360 G8
  2. NIC type: 1GbE Intel i360 (HP OEM label)
  3. Two public IPv4 addresses published directly to the host, and the host configured as an exposed host in the border gateway firewalls
  4. No firewall rules for that host on the gateways
  5. Switch ports for that host configured as trunk/uplink ports with VLAN tagging enabled
  6. We use Azure AD for authentication

In my case, the important point is the port trunk and the VLAN tagging.

Normally VLAN tagging is no issue because the deployment toolkit should set the tag automatically during deployment for all required VMs and for the host system.

In my case, and during many test and validation deployments, that didn't happen. After I started the deployment, a new virtual switch was deployed and a virtual NIC named “deployment” was configured for the host. Afterwards the deployment started. Around 3 hours later, the deployment stopped at step 60.120.121 because it could not connect to the identity provider.

What's the reason for the failure?

First you should know that the Azure Stack deployment switches between the host and the BGPNAT VM for internet communication. Most traffic runs through the NAT VM, but in this case the host communicates directly with the internet.

So what happened? After creating the “deployment” NIC for the host, the deployment tool didn't set the VLAN tag on that virtual NIC. That breaks network communication for the host; the VMs are not affected because the VLAN is set correctly for the NAT VM.
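
You can verify this on the host by listing the VLAN configuration of the management OS vNICs:

    # The vNIC named "deployment" shows up without the expected access VLAN
    # when the toolkit has skipped the tagging step
    Get-VMNetworkAdapterVlan -ManagementOS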

What is the Workaround?

  1. Start the deployment and configure it like normal
  2. Let the deployment run into the failure
  3. Open a new PowerShell with admin permissions (Run as Administrator)
  4. Type in the following command to set the VLAN ID on the “deployment” vNIC (see the sketch below)
  5. Rerun the deployment from the installation folder (see the sketch below)
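
Since the original commands are not preserved in this post, here is a sketch of steps 4 and 5. The VLAN ID 100 is only an example (use the VLAN of your host network), and the script name and -Rerun switch are assumed to match the PoC deployment kit:

    # Step 4: re-apply the VLAN tag to the host vNIC created by the toolkit
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "deployment" -Access -VlanId 100

    # Step 5: rerun the deployment from the installation folder
    .\InstallAzureStackPOC.ps1 -Rerun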

Afterwards the deployment runs smoothly.

 

Please be aware that after the installation the VLAN ID is removed again, so you need to set it one more time.

Hyper-V|W2k12R2|4x1GB|2xFC

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1GbE and 2x Fibre Channel connections. The storage can be connected via Fibre Channel with MPIO. The configuration uses a combination of physical and software-defined / converged networking for Hyper-V.


 Pros and Cons of that solution

 Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Fibre Channel is the most common SAN technology

 Con
– Limited bandwidth for Live Migration
– A lot of technologies involved

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
FC SW01 4/8 GBit/s FC physical, stacked or independent
FC SW02 4/8 GBit/s FC physical, stacked or independent
SoftSW01 1 GBit/s software defined / converged
SoftSW02 1 GBit/s software defined / converged
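
The two software-defined switches typically sit on top of NIC teams built from the 1GbE adapters. A minimal sketch for SoftSW01, assuming the physical adapters are named NIC1 and NIC2 and switch independent teaming is used:

    # Build a team from two of the 1GbE adapters (adapter names are placeholders)
    New-NetLbfoTeam -Name "Team-SoftSW01" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Create the converged switch with weight-based QoS, so the vNIC bandwidth
    # weights further below can be applied
    New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team-SoftSW01" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false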

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24 SoftSW01
Livemigration 450 10.11.45.0/24 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW02
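
For the converged networks above, the host vNICs and their VLAN IDs could be created like this (a sketch using the names and VLAN IDs from the table):

    # Host (management OS) vNICs on the converged switch SoftSW01
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Management"
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Cluster"
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SoftSW01" -Name "Livemigration"

    # Tag each vNIC with the VLAN from the table
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Livemigration" -Access -VlanId 450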

 Possible rear view of the server

 Schematic representation

Switch Port Configuration

Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 20%
Cluster 10%
Livemigration 40%
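
The PowerShell commands from the last column were not preserved in this post; here is a sketch of what they would look like, assuming SoftSW01 was created with -MinimumBandwidthMode Weight:

    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 40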

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
Livemigration medium
VMs depending on VM workload

 

Hyper-V|W2k12R2|8x1GB|2x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 8x 1GbE NICs / LOM (LAN on Motherboard) and 2x 10GbE NICs. The storage can be connected via iSCSI with MPIO or SMB 3.x.x without RDMA. The configuration uses a combination of physical and software-defined / converged networking for Hyper-V.
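
If iSCSI with MPIO is used, the initiator side can be prepared roughly like this. This is only a sketch; the target portal address is a placeholder taken from the storage network listed further below:

    # Install and enable multipathing for iSCSI devices (a reboot may be required)
    Install-WindowsFeature -Name Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Start the iSCSI initiator service and connect to the target portal
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI
    New-IscsiTargetPortal -TargetPortalAddress "10.11.40.10"
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true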


 Pros and Cons of that solution

 Pro
– Good bandwidth for VMs
– Good bandwidth for storage
– Separated NICs for Live Migration, Cluster and Management
– Fully fault redundant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

 Con
– Network becomes limited with a large number of VMs
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
SoftSW01 10 GBit/s software defined / converged
SoftSW02 10 GBit/s software defined / converged

 Necessary Networks

Network name VLAN IP network (IPv4) Connected to switch
Management 10 10.11.10.0/24 SoftSW01
Cluster 11 10.11.11.0/24 SoftSW01
Livemigration 45 10.11.45.0/24 1GBE SW01 / 1GBE SW02
With iSCSI – Storage 40 10.11.40.0/24 10GBE SW01 / 10GBE SW02
With SMB – Storage 50 / 51 10.11.50.0/24 / 10.11.51.0/24 10GBE SW01 / 10GBE SW02
Virtual Machines 200 – x 10.11.x.x/x 1GBE SW01 / 1GBE SW02

 Possible rear view of the server


 Schematic representation


Switch Port Configuration

   

Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
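
As in the first configuration, the commands behind this table would look roughly as follows; the remaining weight can be reserved for the default flow that carries the VM traffic (the switch name and the default flow value are only examples):

    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5

    # Reserve the remaining weight for VM traffic (default flow on the converged switch)
    Set-VMSwitch -Name "SoftSW01" -DefaultFlowMinimumBandwidthWeight 50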

QoS Configuration Switch

Network name Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on VM workload