Nominated for European Commission Expert Panel – SDN & NFV

Hi everybody,

last week something awesome happened to me. I was asked by the European Commission (the executive of the European Union) to join their expert panel on Software Defined Networking and Network Function Virtualisation. Therefore I will travel to Brussels for some workshops and take part in some online surveys. I don’t know how I deserved it, but I really look forward to taking the chance to meet other experts and discuss with them.

If you want to read more about that project, please click on the picture.

 

Hyper-V|W2k12R2|8x1GB|2x10GB

Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 8x 1GB Ethernet ports (NICs and LOM, LAN on Motherboard) and 2x 10GB Ethernet NICs. The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines physical and software-defined / converged networking for Hyper-V.
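If you want to script the converged part, a minimal sketch with PowerShell on Windows Server 2012 R2 could look like this. The physical NIC names ("pNIC01", "pNIC02") and the team name are placeholders; which physical NICs back which soft switch depends on the schematic further down.

# Sketch only: team two physical NICs (placeholder names) and create the
# converged Hyper-V switch SoftSW01 on top of the team.
New-NetLbfoTeam -Name "Team01" -TeamMembers "pNIC01","pNIC02" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Weight-based minimum bandwidth mode; host vNICs are added explicitly later.
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team01" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false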


 Pros and cons of this solution

Pro
– Good bandwidth for VMs
– Good bandwidth for storage
– Separate NICs for Livemigration, Cluster and Management
– Fully fault tolerant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con
– Network bandwidth becomes limited with a large number of VMs
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
SoftSW01 10 GBit/s software defined / converged
SoftSW02 10 GBit/s software defined / converged

 Necessary Networks

Networkname VLAN IP Network (IPv4) Connected to Switch
Management 10 10.11.10.0/24 SoftSW01
Cluster 11 10.11.11.0/24  SoftSW01
Livemigration 45 10.11.45.0/24 1GBE SW01 / 1GBE SW02
With iSCSI – Storage 40 10.11.40.0/24 10GBE SW01 / 10GBE SW02
With SMB – Storage 50 / 51 10.11.50.0/24 / 10.11.51.0/24 10GBE SW01 / 10GBE SW02
Virtual Machines 200 – x 10.11.x.x/x 1GBE SW01 / 1GBE SW02
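As a sketch (assuming SoftSW01 exists as created above), the two host vNICs on SoftSW01 could be added and tagged with the VLAN IDs from this table like so:

# Sketch: create the host vNICs for Management and Cluster on SoftSW01
# and tag them with VLAN 10 and 11 from the table above.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SoftSW01"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SoftSW01"

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 11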

 Possible rear view of the server


 Schematic representation


Switch Port Configuration

   

Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
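A possible way to set these weights with PowerShell (a sketch, assuming the vNIC names from the table):

# Sketch: minimum bandwidth weights for the host vNICs, matching the table.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5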

QoS Configuration Switch

Networkname Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on VM workload

 

Hyper-V|W2k12R2|3x1GB

Don’t use this configuration in production! It’s only for lab use!

This configuration is very similar to the 4x 10GB Ethernet config, but it is not usable for production.

Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 3x 1GB Ethernet ports (NICs and LOM, LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines physical and software-defined / converged networking for Hyper-V.
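A minimal lab sketch for the fully converged setup; the physical NIC names are placeholders and the vNIC names match the tables below:

# Lab sketch only: one team over all three 1GB NICs and a single converged
# switch that carries every host vNIC.
New-NetLbfoTeam -Name "TeamLab" -TeamMembers "pNIC01","pNIC02","pNIC03" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "SoftSW01" -NetAdapterName "TeamLab" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for the networks listed below.
"Management","Cluster","iSCSI","Livemigration" | ForEach-Object {
    Add-VMNetworkAdapter -ManagementOS -Name $_ -SwitchName "SoftSW01"
}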


 Pros and cons of this solution

Pro
– Easy to build and good for lab use
– Enough bandwidth for test workloads
– Fully converged
– Cheap to build

Con
– Limited bandwidth
– Not for production use

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
SoftSW01 1 GBit/s software defined / converged

 Necessary Networks

Networkname VLAN IP Network (IPv4) Connected to Switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24 SoftSW01
Storage 400 10.11.40.0/24 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW01

 Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%
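A sketch that applies all four weights from the table in one loop (assuming the vNICs exist with these names):

# Sketch: minimum bandwidth weights per host vNIC, matching the table above.
$weights = @{ "Management" = 10; "Cluster" = 5; "iSCSI" = 30; "Livemigration" = 30 }
foreach ($vnic in $weights.Keys) {
    Set-VMNetworkAdapter -ManagementOS -Name $vnic -MinimumBandwidthWeight $weights[$vnic]
}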

QoS Configuration Switch

Networkname Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on VM workload

 

Hyper-V|W2k12R2|4x10GB

Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 4x 10GB Ethernet ports (NICs and LOM, LAN on Motherboard). The storage can be connected via iSCSI with MPIO or via SMB 3.x without RDMA. The configuration combines physical and software-defined / converged networking for Hyper-V.
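If the 10GB switches are stacked, the four NICs could also be teamed in LACP mode, as mentioned in the pros and cons below. A sketch with placeholder NIC names:

# Sketch: LACP team over the four 10GB NICs (requires stacked switches);
# otherwise use -TeamingMode SwitchIndependent. NIC names are placeholders.
New-NetLbfoTeam -Name "Team10G" -TeamMembers "pNIC01","pNIC02","pNIC03","pNIC04" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team10G" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false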


 Pros and cons of this solution

Pro
– High bandwidth for VMs
– Good bandwidth for storage
– The software-defined network can be fully controlled from the hypervisor
– Fully fault tolerant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Only one hardware technology is used

Con
– Becomes limited by a large number of VMs
– Network admins won’t like it
– The combination of hardware-defined and software-defined networking is sometimes hard to understand

 Switches

Switch name Bandwidth Switch type
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
SoftSW01 10 GBit/s software defined / converged

 Necessary Networks

Networkname VLAN IP Network (IPv4) Connected to Switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24 SoftSW01
Storage 400 10.11.40.0/24 SoftSW01
Virtual Machines 200 – x 10.11.x.x/x SoftSW01

 Possible rear view of the server


 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 10%
Cluster 5%
iSCSI 30%
Livemigration 30%

QoS Configuration Switch

Networkname Priority
Management medium
Cluster high
Storage high
Livemigration medium
VMs depending on VM workload

 

Hyper-V|W2k12R2|4x1GB|2x10GB|2xFC

Hyper-V Cluster Network configuration with the following parameters:

The following configuration leverages 4x 1GB Ethernet ports (NICs and LOM, LAN on Motherboard), 2x 10GB Ethernet NICs and 2x Fibre Channel connections. The storage is connected via Fibre Channel with MPIO. The configuration combines physical and software-defined / converged networking for Hyper-V.
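A sketch of the network part: two separate teams and converged switches (SoftSW01 on two of the 1GB ports for Management and Cluster, SoftSW02 on the 10GB NICs for the VMs), plus the Multipath I/O feature for the Fibre Channel paths. NIC names are placeholders; Fibre Channel zoning and the vendor DSM are not covered here.

# Sketch only: one team for the host networks and one for the VM traffic.
New-NetLbfoTeam -Name "Team1G" -TeamMembers "pNIC1G-01","pNIC1G-02" -TeamingMode SwitchIndependent
New-NetLbfoTeam -Name "Team10G" -TeamMembers "pNIC10G-01","pNIC10G-02" -TeamingMode SwitchIndependent

New-VMSwitch -Name "SoftSW01" -NetAdapterName "Team1G" -MinimumBandwidthMode Weight -AllowManagementOS $false
New-VMSwitch -Name "SoftSW02" -NetAdapterName "Team10G" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Multipath I/O feature for the Fibre Channel connections (MPIO).
Install-WindowsFeature -Name Multipath-IO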


 Pros and cons of this solution

Pro
– High bandwidth for VMs
– Good bandwidth for storage
– Separate Management and Heartbeat interfaces
– Fully fault tolerant
– Can be used in switch independent or LACP (with stacked switches) teaming mode
– Fibre Channel is the most common SAN technology

Con
– Limited bandwidth for Livemigration
– A lot of NICs and switches needed
– A lot of technologies involved
– Expensive

 Switches

Switch name Bandwidth Switch type
1GBE SW01 1 GBit/s physical, stacked or independent
1GBE SW02 1 GBit/s physical, stacked or independent
10GBE SW01 10 GBit/s physical, stacked or independent
10GBE SW02 10 GBit/s physical, stacked or independent
FC SW01 4/8 GBit/s (FC) physical, stacked or independent
FC SW02 4/8 GBit/s (FC) physical, stacked or independent
SoftSW01 1 GBit/s software defined / converged
SoftSW02 10 GBit/s software defined / converged

 Necessary Networks

Networkname VLAN IP Network (IPv4) Connected to Switch
Management 100 10.11.100.0/24 SoftSW01
Cluster 101 10.11.101.0/24  SoftSW01
Livemigration 450 10.11.45.0/24  1GBE SW01 & 1GBE SW02
Virtual Machines 200 – x 10.11.x.x/x  SoftSW02

 Possible rear view of the server



 Schematic representation


Switch Port Configuration


Bandwidth Configuration vNICs

vNIC min. Bandwidth Weight PowerShell Command
Management 40%
Cluster 10%
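A sketch for these two weights, plus a quick check of what ends up configured (BandwidthSetting is the host vNIC property on Windows Server 2012 R2; verify with Get-Member if in doubt):

# Sketch: weights from the table for the two host vNICs on SoftSW01.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10

# Review the host vNICs and their bandwidth settings afterwards.
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, BandwidthSetting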

QoS Configuration Switch

Networkname Priority
Management medium
Cluster high
Livemigration medium
VMs depending on VM workload