As we come closer to the release of Windows Server 2016, more and more details about the final product are being revealed.
Today I want to give you a short overview of the switch types that will be part of Hyper-V in Windows Server 2016.
|Private Switch
||The private switch allows communications among the virtual machines on the host and nothing else. Even the management operating system is not allowed to participate. This switch is purely logical and does not use any physical adapter in any way. “Private” in this sense is not related to private IP addressing. You can think of this as a switch that has no ability to uplink to other switches.
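As a quick sketch, a private switch can be created with one PowerShell command (the switch name here is just an example):

```powershell
# Create a private switch: VM-to-VM traffic on this host only,
# no physical adapter binding and no management OS vNIC
New-VMSwitch -Name "Isolated" -SwitchType Private
```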
|Internal Switch
||The internal switch is similar to the private switch with one exception: the management operating system can have a virtual adapter on this type of switch and communicate with any virtual machines that also have virtual adapters on the switch. This switch is also not bound to a physical adapter and therefore also cannot uplink to another switch.
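The PowerShell sketch looks almost identical; only the switch type changes ("HostOnly" is an example name):

```powershell
# Create an internal switch: like a private switch, but the management OS
# also gets a virtual adapter on it
New-VMSwitch -Name "HostOnly" -SwitchType Internal
```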
|External Switch
||This switch type must be connected to a physical adapter. It allows communications between the physical network and the management operating system and virtual machines. Do not confuse this switch type with public IP addressing schemes or let its name suggest that it needs to be connected to a public-facing connection. You can use the same private IP address range for the adapters on an external virtual switch that you’re using on the physical network it’s attached to.
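A minimal sketch for creating an external switch; the adapter name "Ethernet" is an example and must match a physical NIC on your host:

```powershell
# Bind an external switch to a physical NIC and keep a virtual adapter
# for the management OS
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true
```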
New Switches available in Windows Server 2016
|External Switch with SET
||SET (Switch Embedded Teaming) is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch. SET allows you to group between one and eight physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the event of a network adapter failure. SET member network adapters must all be installed in the same physical Hyper-V host to be placed in a team.
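A SET-enabled switch is created by passing several adapters to the same command; the switch and NIC names below are examples:

```powershell
# Create a switch with embedded teaming by listing multiple physical
# adapters; all member NICs must be installed in the same Hyper-V host
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
```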
|NAT Mode Switch
||With the latest releases of Windows 10 and Windows Server 2016 Technical Preview 4, Microsoft included a new VM switch type called NAT, which allows virtual machines to sit on an internal network and connect to the external world and the internet using NAT.
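At the time of Technical Preview 4 the NAT switch could be created as sketched below; this syntax may change in later builds, and the names and subnet are examples:

```powershell
# TP4 syntax: create a NAT switch with an internal subnet
New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress "172.16.0.0/24"
# Later builds split this into an internal switch plus a NetNat object, e.g.:
# New-NetNat -Name "VMNAT" -InternalIPInterfaceAddressPrefix "172.16.0.0/24"
```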
Virtual Switches in System Center Virtual Machine Manager
|Standard Switch
||A Standard Switch is basically a Hyper-V switch as shown in Virtual Machine Manager. From a management and feature perspective there are no differences.
|Logical Switch
||A Logical Switch combines Virtual Switch Extensions, Uplink Port Profiles (which define the physical network adapters used by the Hyper-V Virtual Switch, for example for teaming), and Virtual Adapter Port Profiles mapped to Port Classifications, which hold the settings for the virtual network adapters of the virtual machines.
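As a heavily hedged sketch using the VMM PowerShell module (all names are examples, and parameters may differ between VMM versions), two of the building blocks of a logical switch can be created like this:

```powershell
# An uplink port profile defines teaming mode and load balancing for the
# physical adapters behind a logical switch (example names and settings)
New-SCNativeUplinkPortProfile -Name "Uplink-Prod" `
    -LBFOTeamMode SwitchIndependent `
    -LBFOLoadBalancingAlgorithm HostDefault
# A port classification is the label that virtual adapter port profiles
# get mapped to
New-SCPortClassification -Name "Tenant-Traffic"
```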
Not really a switch, but part of the Hyper-V networking stack and currently necessary in multi-tenant scenarios.
|Multi-Tenant Gateway
||In Windows Server 2012 R2, the Remote Access server role includes the Routing and Remote Access Service (RRAS) role service. RRAS is integrated with Hyper-V Network Virtualization, and is able to route network traffic effectively in circumstances where there are many different customers – or tenants – who have isolated virtual networks in the same datacenter. Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.
Today a post about something I sometimes see in the field.
I want to show you how to fix the issue when you get servers and clients with the same SMBIOS ID. Normally that would not be an issue, but as soon as you try to manage them with System Center Virtual Machine Manager or Configuration Manager, it will become one. Both tools use the SMBIOS ID to create a primary key in their databases to identify the system.
Currently I only know the following trick to fix the issue. It would be extremely annoying on many clients or servers, but it actually works.
First, you need two tools:
1: Rufus – to create a bootable USB stick
2: AMIDMI – with this tool you can overwrite the SMBIOS ID
Now create the bootable stick with Rufus and copy the AMIDMI file onto the stick.
Reboot your system from the stick.
Navigate to the folder with your AMIDMI file and run the command amidmi /u
Afterwards you can reboot the system and start Windows again.
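Once Windows is back up, you can verify that the machine now reports a new SMBIOS UUID with a quick PowerShell one-liner:

```powershell
# Read the SMBIOS UUID that VMM/ConfigMgr use to identify the system
Get-WmiObject Win32_ComputerSystemProduct | Select-Object UUID
```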
When you are working with Virtual Machine Manager, you need to remove the host from your management console and add it again. After the host is discovered again, you can see the new SMBIOS ID.
So far I have seen this issue with the following motherboard vendors:
- ASRock (Client & Rack)
- ASUS (Client)
- SuperMicro (Server & ARM)
- Fujitsu (Server)
Today another story I see regularly when I do health checks on customer sites.
One of the first things I found was a new VMM installation on top of a new Microsoft Hyper-V cluster. The issue was that the cluster had been installed first and everything was running on a Hyper-V converged network with standard Hyper-V switches. No VMM network was configured, and the hosts had not been made compliant within VMM.
Why is that so bad? It’s bad because VMM uses its own kind of switches (logical switches) and needs some additional configuration to manage the hosts in the best possible way.
When I ask people why they do it that way, I normally get the answer: “How should I configure the hosts when no VMM is in place before the cluster is installed?”
So here is my answer and how you can do it the right way:
- Install a Hyper-V Host as Standalone host
- Configure and install the VMs for VMM and SQL Server (if needed) on the standalone host
- Perform the full VMM configuration
- Install the other Hyper-V Hosts and roll out the VMM configuration to those hosts
- Cluster the Hyper-V Hosts
- Migrate your SQL DB and VMM with shared nothing live migration to the new Hyper-V Cluster
- Reconfigure the standalone Hyper-V Host with VMM and add it to the cluster
- Run the cluster validation again
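The clustering and migration steps above can be sketched in PowerShell; host names, cluster name, IP address, and storage paths are examples:

```powershell
# Validate and build the cluster out of the freshly rolled-out Hyper-V hosts
Test-Cluster -Node "HV01","HV02"
New-Cluster -Name "HVCL01" -Node "HV01","HV02" -StaticAddress "10.0.0.50"
# Move the VMM (and SQL) VMs with shared nothing live migration
Move-VM -Name "VMM01" -DestinationHost "HV01" `
    -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\VMM01"
# Finally, re-run validation with the former standalone host added
Test-Cluster -Node "HV01","HV02","HV03"
```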
That’s all, and it costs you the same amount of time.
Today I had an issue with the Microsoft Virtual Machine Converter during the migration of a VM from VMware to Hyper-V. The screenshot below shows the issue.
It shows a configuration mismatch which stopped the conversion of the VM.
The solution is pretty easy. You need to check and change the virtual machine configuration in VMware vCenter.
First, check the SCSI controller type; it needs to be set to LSI Logic.
Second, and more likely the reason for the issue, one or more of your disks are configured as independent. Just uncheck the box and you’re fine. 🙂
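If you prefer to fix both settings from the command line, here is a hedged VMware PowerCLI sketch; "SourceVM" is an example name, and the VM should be powered off before changing these settings:

```powershell
# Set the SCSI controller type to LSI Logic
Get-ScsiController -VM "SourceVM" | Set-ScsiController -Type VirtualLsiLogic
# Switch independent disks back to normal (persistent) mode
Get-HardDisk -VM "SourceVM" | Set-HardDisk -Persistence "Persistent"
```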
Sometimes when I’m invited to visit a customer to “optimize their highly available Virtual Machine Manager”, I see the following configuration.
When I ask why they say it is highly available, they normally tell me that they can move the machine from one host to another. Then I ask: “And what happens when you need to patch the SQL DB, VMM, or Windows Server, or when the storage fails?”
Here comes the point where most people realize that high availability means more than moving services from A to B.
So now let us think about what we need to make our VMM server highly available.
On the VMM side we need the following parts:
- two VMM Management Servers running in a Cluster
- two Database Servers running in a Cluster
- two File Servers running in a Cluster as Library Servers
- two Hyper-V Hosts for VM Placement
- two Storage Systems with Storage Replication
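With the VMM management servers clustered, a quick hedged check of the setup could look like this; "VMMHA" is an example name for the clustered VMM role:

```powershell
# Check the highly available VMM role and which node currently owns it
Get-ClusterGroup -Name "VMMHA" | Format-List Name,OwnerNode,State
# Connect to VMM through the cluster role name, not an individual node
Get-SCVMMServer -ComputerName "VMMHA"
```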
When it comes to a very big Hyper-V and VMM environment, I would suggest running your management systems in a separate Hyper-V cluster. That helps you keep your management workload running even when you need to put your fabric cluster into maintenance mode.