Virtual Switches in Windows Server 2016 Hyper-V

As we come closer to the release of Windows Server 2016, more and more details about the final product are being revealed.

Today I want to give you a short overview of the switch types that will be part of Hyper-V in Windows Server 2016.

“Classical” Switches

Private Switch – The private switch allows communications among the virtual machines on the host and nothing else. Even the management operating system is not allowed to participate. This switch is purely logical and does not use any physical adapter in any way. “Private” in this sense is not related to private IP addressing. You can mentally think of this as a switch that has no ability to uplink to other switches. (Source: http://www.altaro.com/hyper-v/the-hyper-v-virtual-switch-explained-part-1/)

Internal Switch – The internal switch is similar to the private switch with one exception: the management operating system can have a virtual adapter on this type of switch and communicate with any virtual machines that also have virtual adapters on the switch. This switch also does not have any matching to a physical adapter and therefore also cannot uplink to another switch. (Source: http://www.altaro.com/hyper-v/the-hyper-v-virtual-switch-explained-part-1/)

External Switch – This switch type must be connected to a physical adapter. It allows communications between the physical network and the management operating system and virtual machines. Do not confuse this switch type with public IP addressing schemes or let its name suggest that it needs to be connected to a public-facing connection. You can use the same private IP address range for the adapters on an external virtual switch that you’re using on the physical network it’s attached to. (Source: http://www.altaro.com/hyper-v/the-hyper-v-virtual-switch-explained-part-1/)
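For reference, here is a minimal PowerShell sketch of how the three classical switch types can be created with the in-box Hyper-V module (the switch and adapter names are only placeholders):

# Purely virtual switch, no uplink, no host access
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private

# Like Private, but the management OS also gets a vNIC on the switch
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# Bound to a physical NIC; -AllowManagementOS $true also connects the host
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true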

New Switches available in Windows Server 2016

External Switch with SET – SET (Switch Embedded Teaming) is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch. SET allows you to group between one and eight physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the event of a network adapter failure. SET member network adapters must all be installed in the same physical Hyper-V host to be placed in a team. (Source: https://technet.microsoft.com/en-US/library/mt403349.aspx#bkmk_sswitchembedded)

NAT Mode Switch – With the latest releases of Windows 10 and Windows Server 2016 Technical Preview 4, Microsoft included a new VM switch type called NAT, which allows virtual machines to sit on an internal network and still reach the external world and the internet using NAT. (Source: http://www.thomasmaurer.ch/2015/11/hyper-v-virtual-switch-using-nat-configuration/)
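Both new types can be set up from PowerShell. The following is only a rough sketch with placeholder adapter names and an example address range; note that the dedicated NAT switch type described in the linked post belongs to the Technical Preview 4 builds, while later builds typically implement the same scenario with an internal switch plus New-NetNat:

# SET: passing more than one adapter to New-VMSwitch creates a switch-embedded team
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# NAT scenario on newer builds: internal switch + host IP + NAT object
New-VMSwitch -Name "NATSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix "172.16.0.0/24"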

Virtual Switches in System Center Virtual Machine Manager

Standard Switch – A Standard Switch is basically a Hyper-V switch as it is shown in Virtual Machine Manager. From the management and feature perspective there are no differences.

Logical Switch – A Logical Switch bundles Virtual Switch Extensions, Uplink Port Profiles (which define the physical network adapters used by the Hyper-V Virtual Switch, for example for teaming) and Virtual Adapter Port Profiles mapped to Port Classifications (which contain the settings for the virtual network adapters of the virtual machines).

Not really a switch, but part of the Hyper-V networking stack and currently necessary in multi-tenant scenarios:

Multi Tenant Gateway – In Windows Server 2012 R2, the Remote Access server role includes the Routing and Remote Access Service (RRAS) role service. RRAS is integrated with Hyper-V Network Virtualization, and is able to route network traffic effectively in circumstances where there are many different customers – or tenants – who have isolated virtual networks in the same datacenter. Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them. (Source: https://technet.microsoft.com/en-us/library/dn641937.aspx)
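As a rough pointer rather than a deployment guide: to my understanding the RRAS gateway is switched into multi-tenant mode and then populated per tenant roughly like this (the tenant name is just an example):

# Enable RRAS in multi-tenant mode on the gateway
Install-RemoteAccess -MultiTenancy

# Create an isolated routing domain for one tenant
Enable-RemoteAccessRoutingDomain -Name "Tenant01" -Type All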

New Blogpost about Azure Backupserver @ Azure Community Deutschland

Hi everybody,

a few minutes ago my blog post about Microsoft Azure Backup Server went online @ Azure Community Deutschland.

To read it please click here.

Azure Stack Technical Preview (POC): Hardware requirements – published

Hi everybody,

even with the release date probably moved to Q4/2016 or sometime in 2017, Microsoft keeps publishing more and more information about its new Azure Stack.

Yesterday they published the hardware requirements for Azure Stack, which you can find in the original blog post here.


Source: http://blogs.technet.com/b/server-cloud/archive/2015/12/21/microsoft-azure-stack-hardware-requirements.aspx

Hardware requirements for Azure Stack Technical Preview (POC)

Note that these requirements only apply to the upcoming POC release; they may change for future releases.

Component | Minimum | Recommended
Compute: CPU | Dual-Socket: 12 Physical Cores | Dual-Socket: 16 Physical Cores
Compute: Memory | 96 GB RAM | 128 GB RAM
Compute: BIOS | Hyper-V Enabled (with SLAT support) | Hyper-V Enabled (with SLAT support)
Network: NIC | Windows Server 2012 R2 Certification required for NIC; no specialized features required | Windows Server 2012 R2 Certification required for NIC; no specialized features required
Disk drives: Operating System | 1 OS disk with minimum of 200 GB available for system partition (SSD or HDD) | 1 OS disk with minimum of 200 GB available for system partition (SSD or HDD)
Disk drives: General Azure Stack POC Data | 4 disks. Each disk provides a minimum of 140 GB of capacity (SSD or HDD). | 4 disks. Each disk provides a minimum of 250 GB of capacity.
HW logo certification | Certified for Windows Server 2012 R2
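A quick way to compare a candidate host against the compute minimums is a short PowerShell/WMI query like the one below; the BIOS side (Hyper-V enabled, SLAT support) still has to be verified in the firmware or with a tool such as Coreinfo:

# Physical cores per socket and total memory vs. the POC minimums above
Get-WmiObject Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors
Get-WmiObject Win32_ComputerSystem |
    Select-Object @{Name="MemoryGB"; Expression={[math]::Round($_.TotalPhysicalMemory / 1GB)}}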

Storage considerations

Data disk drive configuration: All data drives must be of the same type (SAS or SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO / multi-path support is provided).
HBA configuration options:
1. (Preferred) Simple HBA
2. RAID HBA – Adapter must be configured in “pass through” mode
3. RAID HBA – Disks should be configured as Single-Disk, RAID-0
Supported bus and media type combinations

• SATA HDD
• SAS HDD
• RAID HDD
• RAID SSD (if the media type is unspecified/unknown*)
• SATA SSD + SATA HDD**
• SAS SSD + SAS HDD**

* RAID controllers without pass-through capability can’t recognize the media type. Such controllers will mark both HDD and SSD as Unspecified. In that case, the SSD will be used as persistent storage instead of caching devices. Therefore, you can deploy the Microsoft Azure Stack POC on those SSDs.

** For tiered storage, you must have at least 3 HDDs.

Example HBAs: LSI 9207-8i, LSI-9300-8i, or LSI-9265-8i in pass-through mode
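If you are unsure how your controller presents the disks to Windows, the in-box Storage cmdlets show the bus and media type the POC setup will see; RAID controllers without pass-through typically report the media type as Unspecified:

# Bus type, media type and poolability of all physical disks
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, BusType, MediaType, CanPool, Size -AutoSize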


While the above configuration is generic enough that many servers should fit the description, we recommend a couple of SKUs: Dell R630 and the HPE DL 360 Gen 9. Both these SKUs have been in-market for some time.

Microsoft Announces the StorSimple Virtual Array Preview

Hi everybody,

yesterday Microsoft announced the public preview of its new StorSimple Virtual Array. For me, it is a great fit in Microsoft's cloud and software-defined strategy. The virtual array can run under Hyper-V or VMware ESXi and work as a NAS or iSCSI server to manage up to 64 TB of storage in Azure.

What’s new in the array? (quote from azure.microsoft.com)

Virtual array form factor

The StorSimple Virtual Array is a virtual machine which can be run on Hyper-V (2008 R2 and above) or VMware ESXi (5.5 and above) hypervisors. It provides the ability to configure the virtual array with data disks of different sizes to accommodate the working set of the data managed by the device. A web-based GUI provides a fast and easy way for the initial setup of the virtual array.

Multi-protocol

The virtual array can be configured as a File Server (NAS) which provides ability to create shares for users, departments and applications or as an iSCSI server (SAN) which provides ability to create volumes (LUNs) for mounting on host servers for applications and users.

Data pinning

Shares and volumes can be created as locally-pinned or tiered. Locally-pinned shares and volumes give quick access to data which will not be tiered, for example a small transactional database that requires predictable access to all data. These shares and volumes are backed up to the cloud along with tiered shares and volumes for data protection.

Data tiering

We introduced a new algorithm for calculating the most used data by defining a heat map which tracks the usage of files and blocks at a granular level. This assigns a heat value to the data based on read and write patterns. This heat map is used for tiering of data when the local tiers are full. Data with lowest heat value (coldest) tiers to the cloud first, while the data with higher heat value is retained in the local tiers of the virtual array. The data on the local tiers is the working set which is accessed frequently by the users. The heat map is backed up with every cloud snapshot to the cloud and in the event of a DR, the heat map will be used for restoring and rehydrating the data from the cloud.

Item level recovery

The virtual array, configured as a file server, provides ability for users to restore their files from recent backups using a self-service model. Every share will have a .backups folder which will contain the most recent backups. The user can navigate to the desired backup and copy the files and folders to restore them. This eliminates calls to administrators for restoring files from backups. The virtual array can restore the entire share or volume from a backup as a new share or a volume on the same virtual appliance.

Backups


If you want to try out the preview or get more insights, please click here.

Hyper-V|W2k12R2|4x1GB|2xFC

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 4x 1 GbE and 2x Fibre Channel connections. The storage can be connected via Fibre Channel with MPIO. The configuration combines the physical setup with a software-defined / converged network for Hyper-V.


Pros and Cons of this solution

Pro:
– High bandwidth for VMs
– Good bandwidth for storage
– Fault redundant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode
– Fibre Channel is the most common SAN technology

Con:
– Limited bandwidth for live migration
– A lot of technologies involved

 Switches

Switch name | Bandwidth | Switch type
1GBE SW01 | 1 GBit/s | physical, stacked or independent
1GBE SW02 | 1 GBit/s | physical, stacked or independent
FC SW01 | 4/8 GBit/s FC | physical, stacked or independent
FC SW02 | 4/8 GBit/s FC | physical, stacked or independent
SoftSW01 | 1 GBit/s | software-defined / converged
SoftSW02 | 1 GBit/s | software-defined / converged
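One possible way to build the converged switches on top of the 1 GbE ports is sketched below in PowerShell; the team name, NIC names, teaming mode and the assumption of two adapters per soft switch are placeholders of this sketch and depend on whether the physical switches are stacked (LACP) or independent:

# Team two 1 GbE adapters (use -TeamingMode Lacp with stacked switches)
New-NetLbfoTeam -Name "Team-SoftSW01" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Converged Hyper-V switch on top of the team, with weight-based QoS for the vNICs
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team-SoftSW01" -MinimumBandwidthMode Weight -AllowManagementOS $false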

Necessary Networks

Network name | VLAN | IP network (IPv4) | Connected to switch
Management | 100 | 10.11.100.0/24 | SoftSW01
Cluster | 101 | 10.11.101.0/24 | SoftSW01
Livemigration | 450 | 10.11.45.0/24 | SoftSW01
Virtual Machines | 200 – x | 10.11.x.x/x | SoftSW02
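The three management-OS vNICs on SoftSW01 and their VLAN IDs from the table can be created roughly as follows (the vNIC names follow the table, everything else is an assumption of this sketch):

# Management-OS vNICs on the converged switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SoftSW01"
Add-VMNetworkAdapter -ManagementOS -Name "Livemigration" -SwitchName "SoftSW01"

# Tag each vNIC with its VLAN from the table above
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Livemigration" -Access -VlanId 450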

Possible rear view of the server



 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC | Min. bandwidth weight | PowerShell command
Management | 20% | (see the sketch below)
Cluster | 10% | (see the sketch below)
Livemigration | 40% | (see the sketch below)
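A minimal sketch of the commands behind that column, assuming the vNICs were created on a switch in Weight mode as shown earlier (the remaining weight is implicitly left to the VM traffic):

# Minimum bandwidth weights for the management-OS vNICs
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Livemigration" -MinimumBandwidthWeight 40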

QoS Configuration on the Switches

Network name | Priority
Management | medium
Cluster | high
Livemigration | medium
VMs | depending on VM workload