New free eBook: Microsoft System Center Building a Virtualized Network Solution, Second Edition

Hey everybody,

Microsoft offers a new free eBook written by Nigel Cain, Michel Luescher, Damian Flynn, and Alvin Morales. You can find the book here: Download

What topics are included in this book?

The vast majority of the book is focused on architecture and design, highlighting key design decisions and providing best practice advice and guidance relating to each major feature of the solution.

  • Chapter 1: Key concepts. A virtualized network solution built on Windows Server and System Center depends on a number of different features. This chapter outlines the role each of these features plays in the overall solution and how they are interconnected.
  • Chapter 2: Logical networks. This chapter provides an overview of the key considerations, outlines some best practice guidance, and describes a process for identifying the set of logical networks that are needed in your environment.
  • Chapter 3: Hyper-V port profiles. This chapter discusses the different types of port profiles that are used in Virtual Machine Manager, outlines why you need them and what they are used for, and provides detailed guidance on how and when to create them.
  • Chapter 4: Logical switches. This chapter describes the function and purpose of logical switches, which are essentially templates that allow you to consistently apply the same settings and configuration across multiple hosts.
  • Chapter 5: Network Virtualization gateway. This chapter outlines key design choices and considerations for providing cross-premises connectivity from networks at tenant sites to dedicated per-tenant virtual networks in a service provider network.
  • Chapter 6: Deployment. This chapter builds on the material discussed in previous chapters and walks through common deployment scenarios, highlighting known issues (and workarounds) relating to the deployment and use of logical switches in your environment.
  • Chapter 7: Operations. Even after you have carefully planned a virtual network solution, things outside of your immediate control might force changes to your virtualized network solution. This chapter walks you through some relatively common scenarios and provides recommendations, advice, and guidance for how best to deal with them.
  • Chapter 8: Diagnosing connectivity issues. This chapter looks at how to approach a connectivity problem with a virtualized network solution, the process you should follow to troubleshoot the problem, and some actions you can take to remediate the issue and restore service.
  • Chapter 9: Cloud Platform System network architecture. This chapter reviews the design and key decision points for the network architecture and virtualized network solution within the Microsoft Cloud Platform System.

To recap, this book is mainly focused on architecture and design (what is needed to design a virtualized network solution) rather than on the actual steps required to deploy it in your environment. Other than in a few chapters, you will find few code examples. This is by design. Our focus here is not on the details of how to achieve a specific goal but rather on what you need to do to build out a solution that meets the needs of your business and provides a platform for the future.

When you have designed a solution using the guidelines documented in this book, you will be able to make effective use of some of the excellent materials and examples available on the Building Clouds blog (http://blogs.technet.com/b/privatecloud/) to assist you with both solution deployment and ongoing management.

Using shared VHDx for guest clustering on storage without support for Microsoft Clustering & virtualization

Hi everybody,

Last weekend my friend Udo Walberer and I struggled to set up a guest cluster with shared VHDx files as cluster volumes. We weren’t able to get it to work.

We were able to connect the VHDx and configure it as a cluster volume, but after a reboot the VMs wouldn’t start and we got the following error:

[Screenshot: shared VHDx error message]


After some research we figured out that the storage he was using had no support for Microsoft Clustering and Virtualization, nor for Persistent Reservations. After we tested on a supported storage system, everything worked fine.

So before you try to use shared VHDx, please check whether your storage supports these options.
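If you want to check and set this up from PowerShell, here is a minimal sketch (the node names, VM name, and path are assumptions). The cluster validation storage tests include the SCSI-3 Persistent Reservation checks that shared VHDx depends on, and -SupportPersistentReservations is what enables sharing the VHDx on Windows Server 2012 R2:

    # Validate the underlying Hyper-V cluster storage first; this includes
    # the SCSI-3 Persistent Reservation tests.
    Test-Cluster -Node "HV01","HV02" -Include "Storage"

    # Attach the VHDx to a guest cluster VM as a shared disk.
    Add-VMHardDiskDrive -VMName "GuestNode1" `
        -Path "C:\ClusterStorage\Volume1\SharedDisk.vhdx" `
        -SupportPersistentReservations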

The two screenshots below show two Synology storage systems, the first without this support and the second with it.

[Screenshot: Synology storage without Microsoft Clustering / Persistent Reservation support]

[Screenshot: Synology storage with Microsoft Clustering / Persistent Reservation support]


Differences in Cluster Traffic with different workloads

Hi all,

Last week I was at a customer site and had the chance to take some screenshots of two clusters, which show how much heartbeat traffic a cluster can produce.

1. A two-node cluster with four virtual machines. Here the heartbeat traffic is around 500 Kbit/s on the network, so pretty low.

[Screenshot: heartbeat traffic on the two-node cluster]


2. A six-node cluster with 215 virtual machines. Without any external action, the cluster heartbeat generates continuous traffic of around 60 Mbit/s. When I started some live migrations, the heartbeat traffic jumped up to as much as 150 Mbit/s.

[Screenshot: heartbeat traffic on the six-node cluster]


So my conclusion: heartbeat traffic multiplies with the load and the number of nodes within the cluster.
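If you want to take similar measurements on your own cluster, here is a small sketch using performance counters. Note that the counter instance names use the adapter description, not the friendly name; you can list them with (Get-Counter -ListSet 'Network Interface').PathsWithInstances:

    # Sample the throughput of all NICs once per second for 30 seconds
    # and print it in Mbit/s.
    Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' `
                -SampleInterval 1 -MaxSamples 30 |
        ForEach-Object {
            foreach ($sample in $_.CounterSamples) {
                '{0}: {1:N1} Mbit/s' -f $sample.InstanceName, ($sample.CookedValue * 8 / 1e6)
            }
        }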


Hyper-V|W2k12R2|8x1GB|2x10GB

Hyper-V cluster network configuration with the following parameters:

The following configuration leverages 8x 1 GbE NICs (LOM, LAN on Motherboard) and 2x 10 GbE NICs. Storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines physical and software-defined / converged networking for Hyper-V.
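As a rough sketch of how the underlying NIC teams could be created (the adapter names and the split of the eight 1 GbE ports are assumptions; use -TeamingMode Lacp instead when your switches are stacked):

    # One team on 1 GbE ports (e.g. for VMs and live migration) and one
    # team on the two 10 GbE ports for the converged switch.
    New-NetLbfoTeam -Name "Team-1GbE" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
                    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-NetLbfoTeam -Name "Team-10GbE" -TeamMembers "NIC10G-1","NIC10G-2" `
                    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic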


Pros and Cons of this solution

Pro:
– Good bandwidth for VMs
– Good bandwidth for storage
– Separate NICs for live migration, cluster, and management
– Fully fault tolerant
– Can be used in switch-independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con:
– The network becomes a limiting factor with a large number of VMs
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

Switches

1GBE SW01    1 Gbit/s    physical, stacked or independent
1GBE SW02    1 Gbit/s    physical, stacked or independent
10GBE SW01   10 Gbit/s   physical, stacked or independent
10GBE SW02   10 Gbit/s   physical, stacked or independent
SoftSW01     10 Gbit/s   software-defined / converged
SoftSW02     10 Gbit/s   software-defined / converged

Necessary Networks

Networkname            VLAN      IP Network (IPv4)                Connected to Switch
Management             10        10.11.10.0/24                    SoftSW01
Cluster                11        10.11.11.0/24                    SoftSW01
Livemigration          45        10.11.45.0/24                    1GBE SW01 / 1GBE SW02
With iSCSI – Storage   40        10.11.40.0/24                    10GBE SW01 / 10GBE SW02
With SMB – Storage     50 / 51   10.11.50.0/24 / 10.11.51.0/24    10GBE SW01 / 10GBE SW02
Virtual Machines       200 – x   10.11.x.x/x                      1GBE SW01 / 1GBE SW02

Possible rear view of the server


Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC         min. Bandwidth Weight
Management   10%
Cluster      5%
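The matching PowerShell commands for the converged part could look like this (the switch, team, and vNIC names are assumptions; the VLAN IDs come from the networks table above):

    # Create the converged switch in weight-based QoS mode on the 10 GbE team.
    New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team-10GbE" `
                 -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Management vNIC: VLAN 10, minimum bandwidth weight 10%.
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10

    # Cluster vNIC: VLAN 11, minimum bandwidth weight 5%.
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SoftSW01"
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 11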

QoS Configuration Switch

Networkname     Priority
Management      medium
Cluster         high
Storage         high
Livemigration   medium
VMs             depending on VM workload


Why you should have a network for cluster heartbeat only!

One topic I often see during my day-to-day work is that customers forget to use a dedicated cluster network and instead run cluster traffic over other networks like live migration or management.

With today’s blog post I want to explain why you should use a separate cluster network and what you need to configure to get it running.

First, how does a cluster heartbeat work? You can think of it like your own heartbeat. Every second, each cluster node sends a heartbeat and asks the other nodes for their status. If five heartbeats in a row fail (around five seconds with the default settings), the cluster will remove the node and migrate its workloads.
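You can check these settings on a running cluster like this:

    # Show the current heartbeat settings; with the Server 2012 R2 defaults
    # a heartbeat is sent every 1000 ms and a node is considered down after
    # 5 missed heartbeats (same subnet).
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold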

So what happens if you put the cluster heartbeat on, for example, the live migration network? When a live migration starts and saturates the link, the cluster heartbeat can fail, and both your live migration and your cluster node will fail.

OK, now some MVPs and IT pros say you can use other networks as fallback heartbeat networks. Yes, you can have fallbacks, BUT the cluster will try three times to get the heartbeat through before changing to the other network, and normally the heartbeat will fail there too.

In your own interest, you should use a dedicated cluster network.

Now let’s look at the options you have to create a cluster network.

1. You can use a dedicated physical NIC team for your cluster network

[Diagram: dedicated physical NIC team for the cluster network]

2. You can share a NIC team, for example with management, by adding an additional VLAN tag on the team

[Diagram: shared NIC team with an additional VLAN for cluster traffic]

3. For Hyper-V, you can create an additional virtual NIC for cluster traffic

[Diagram: additional virtual NIC on the converged switch for cluster traffic]


After you have created your cluster network, you need a few more steps to guarantee bandwidth for the cluster heartbeat.

1. Enable QoS (Quality of Service) for the cluster network on your physical network

2. Configure network connection binding and cluster communication priority as described in my last blog post, How to configure cluster traffic priority on a Windows Server (see the sketch at the end of this post)

3. On a Hyper-V host or with Virtual Machine Manager, set a minimum bandwidth for the cluster network interface. I normally use a minimum of 5 to 10%.

PowerShell for Hyper-V Host
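For example (the vNIC name "Cluster" is an assumption; the virtual switch must have been created with -MinimumBandwidthMode Weight):

    # Reserve a minimum of 5% of the switch bandwidth for the cluster vNIC.
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5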

In VMM you use the Hyper-V port profile “Cluster Workload”.
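And for step 2 above, a minimal sketch using the failover clustering cmdlets (the network name "Cluster" is an assumption):

    # Allow cluster communication only on this network (Role 1) and give it
    # the lowest metric so the cluster prefers it for heartbeat traffic.
    (Get-ClusterNetwork -Name "Cluster").Role = 1
    (Get-ClusterNetwork -Name "Cluster").Metric = 100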


So that should do the trick. :)