Dell PowerEdge VRTX Networking

This article was written by the @DellTechCenter team.

Dell PowerEdge VRTX is a converged infrastructure product focused on remote, branch, and small office requirements.

This document outlines the configuration of the Dell PowerEdge VRTX 1GbE switch I/O Module (IOM) to establish basic connection to the local network. Dell PowerEdge VRTX can be configured with an integrated 1GbE pass-through switch module or an integrated 1GbE switch module. The 1GbE switch module is recommended for most applications.

You can download the guide here.

What’s new in Windows Server 2012 R2 networking?

What's new in Windows Server 2012 R2 (RTM)?
What's new in Windows Server 2012 R2 Storage?
What's new in Windows Server 2012 R2 Server Virtualization?
What's new in Windows Server 2012 R2 Networking?
What's new in Windows Server 2012 R2 Server Management and Automation?
What's new in Windows Server 2012 R2 VDI?
What's new in Windows Server 2012 R2 Access and Information Protection?
What's new in Windows Server 2012 R2 Web Application and Platform?
What's New in Windows Server 2012 R2 Essentials?
What's new in Windows Server 2012 R2 Web Application and Platform, Active Directory, Print Services and Clustering?

What’s New in Networking in Windows Server 2012 R2?

The following networking technologies are new or improved in Windows Server® 2012 R2 Preview.

802.1X Authenticated Wired Access in Windows 8.1 Preview and Windows Server 2012 R2 Preview provides new features and capabilities over previous versions.

For more information, see What’s New in 802.1X Authenticated Wired Access for Windows Server 2012 R2.

802.1X Authenticated Wireless Access in Windows 8.1 Preview and Windows Server 2012 R2 Preview provides new features and capabilities over previous versions.

For more information, see What’s New in 802.1X Authenticated Wireless Access in Windows Server 2012 R2.

Domain Name System (DNS) in Windows Server 2012 R2 Preview provides new features and capabilities over previous versions.

For more information, see What’s New in DNS Server in Windows Server 2012 R2.

Dynamic Host Configuration Protocol (DHCP) in Windows Server 2012 R2 Preview provides new features and capabilities over previous versions.

For more information, see What’s New in DHCP in Windows Server 2012 R2.

Hyper-V Network Virtualization (HNV) has many important updates that enable hybrid cloud and private cloud solutions.

For more information, see What’s New in Hyper-V Network Virtualization in Windows Server 2012 R2.

Hyper-V Virtual Switch provides new features and capabilities over previous versions.

For more information, see What’s New in Hyper-V Virtual Switch in Windows Server 2012 R2.

IP Address Management (IPAM) is a feature that was first introduced in Windows Server 2012 that provides highly customizable administrative and monitoring capabilities for the IP address infrastructure on a corporate network. IPAM in Windows Server 2012 R2 Preview includes many enhancements.

For more information, see What’s New in IPAM in Windows Server 2012 R2.

Remote Access provides new features and capabilities over previous versions.

For more information, see What’s New in Remote Access in Windows Server 2012 R2.

New in Windows Server 2012 R2 Preview, virtual Receive-side Scaling (vRSS) enables network adapters to distribute network processing load across multiple virtual processors in multi-core virtual machines (VMs).

For more information, see Virtual Receive-side Scaling in Windows Server 2012 R2.
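As a hedged sketch (my addition, not from the original article): vRSS builds on the guest's normal RSS plumbing, so inside a multi-core VM you would check and enable RSS on the virtual adapter. The adapter name "Ethernet" is an assumption for your environment:

```powershell
# Run inside the guest VM (Windows Server 2012 R2).
# "Ethernet" is an assumed adapter name - adjust for your VM.
Get-NetAdapterRss -Name "Ethernet"       # check the current RSS state
Enable-NetAdapterRss -Name "Ethernet"    # enable RSS so vRSS can spread the load
```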

New in Windows Server 2012 R2 Preview, Windows Server Gateway is a virtual machine (VM)-based software router that allows Cloud Service Providers (CSPs) and Enterprises to enable datacenter and cloud network traffic routing between virtual and physical networks, including the Internet.

Windows Server Gateway routes network traffic between the physical network and VM network resources, regardless of where the resources are located. You can use Windows Server Gateway to route network traffic between physical and virtual networks at the same physical location or at many different physical locations, providing network traffic flow in private and hybrid cloud scenarios.

For more information, see Windows Server Gateway.

 

What’s New in IPAM in Windows Server 2012 R2?

Feature/functionality (new or improved) and description:

Role-based access control (new): Role-based access control enables you to customize the types of operations and access permissions for users and groups of users on specific objects.
Virtual address space management (new): IPAM streamlines management of physical and virtual IP address space in System Center Virtual Machine Manager.
Enhanced DHCP server management (improved): Several new operations are available in IPAM to enhance the monitoring and management of the DHCP Server service on the network.
External database support (new): In addition to the Windows Internal Database (WID), IPAM also optionally supports the use of a Microsoft SQL Server database.
Upgrade and migration support (new): If you installed IPAM on Windows Server 2012, your data is maintained and migrated when you upgrade to Windows Server 2012 R2 Preview.
Enhanced Windows PowerShell support (improved): Windows PowerShell support for IPAM is greatly enhanced to provide extensibility, integration, and automation support.

What’s New in DHCP in Windows Server 2012 R2?

Feature/functionality (new or improved) and description:

DNS registration enhancements (new): You can use DHCP policies to configure conditions based on the fully qualified domain name (FQDN) of DHCP clients, and to register workgroup computers using a guest DNS suffix.
DNS PTR registration options (new): You can enable DNS registration of both address (A) and pointer (PTR) records, or just enable registration of A records.
Windows PowerShell for DHCP server (improved): New Windows PowerShell cmdlets are available.
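As a sketch of the A-only registration option from the table above (this example is my addition, not from the original article; the server name is an assumption):

```powershell
# Sketch: register only A records for DHCP clients, skipping PTR records.
# -DisableDnsPtrRRUpdate is the Windows Server 2012 R2 addition;
# "dhcp01" is an assumed server name.
Set-DhcpServerv4DnsSetting -ComputerName "dhcp01" -DisableDnsPtrRRUpdate $true
```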

What’s New in DNS Server in Windows Server 2012 R2?

Feature/functionality (new or improved) and description:

Enhanced zone level statistics (improved): Zone level statistics are available for different resource record types, zone transfers, and dynamic updates.
Enhanced DNSSEC support (improved): DNSSEC key management and support for signed file-backed zones are improved.
Enhanced Windows PowerShell support (improved): New Windows PowerShell parameters are available for DNS Server.
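For example, the enhanced zone-level statistics surface through the DnsServer module (a sketch I am adding here; the zone name is an assumption):

```powershell
# Example: per-zone statistics via the new -ZoneName parameter
# in Windows Server 2012 R2; "contoso.com" is an assumed zone name.
Get-DnsServerStatistics -ZoneName "contoso.com"
```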

Poor network performance in VMs when creating a virtual switch and using a Broadcom NIC with Windows Server 2012

This issue is resolved; please read this post: http://datacenter-flo.azurewebsites.net/?p=2050

 

Some customers reported performance issues with virtual machines running on Hyper-V v3 (Windows Server 2012) after creating a virtual switch.

Together with colleagues, I found out that the issue only appears with Broadcom network interface cards.

We saw that the issue is related to “Virtual Machine Queues” (VMQ) being enabled on the network adapter.

If you are facing this issue, please try to disable “Virtual Machine Queues” first on the virtual NIC in your VM. If this doesn’t resolve your issue, also disable “Virtual Machine Queues” on the physical NIC of your server.

The issue should be fixed with a Broadcom firmware and driver update for the NIC.

You can do this in the Adapter Properties of the Network Interface Card.
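If you prefer PowerShell over the adapter properties dialog, the built-in NetAdapter module can toggle VMQ. This is a sketch I am adding, not from the original post; the adapter name "NIC1" is an example and must match your system:

```powershell
# Example only - replace "NIC1" with your Broadcom adapter's name
# (use Get-NetAdapter to list the names on your host).
Get-NetAdapterVmq                         # show the VMQ state of all adapters
Disable-NetAdapterVmq -Name "NIC1"        # turn VMQ off on the physical NIC
# Re-enable later, once updated Broadcom firmware/drivers are installed:
# Enable-NetAdapterVmq -Name "NIC1"
```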

Adapter Properties in BACS

Adapter Properties of a NIC

 

 

How to create integrated NIC Teaming with Windows Server 2012 via GUI

One big new feature in Windows Server 2012 is integrated NIC Teaming. In the past, when you wanted to team network interfaces in Windows, you needed special third-party software, e.g. the Broadcom Advanced Control Suite (BACS). But this brought its own problems, e.g. when you ran Group Policies against teamed devices, the policy was often not executed; I may explain this in a later blog post. Microsoft has also published some Hyper-V scenarios where third-party NIC teaming is not supported.

Now this is no problem anymore.

With integrated NIC Teaming in Windows Server 2012 you can team NICs or even LOMs. You can also team adapters from different vendors (e.g. Intel and Broadcom) together in one team. In the past this was very difficult, unstable, and unsupported; the only software able to do this was the Intel Advanced Networking Services (ANS) teaming software.

Let me show you how to configure it via GUI.

 

1. Open the Server Manager and click on “Configure this local Server”.

2. Click on “Disabled” next to NIC Teaming to start the wizard.

3. The wizard starts, and you have a few options for adding NICs to a team.

4. How to add a NIC to a Team.

Selection over NIC box:

 

a. Select all NICs you want to use in your team in the right-hand box with Shift+click.

 

b. Click “Tasks” and then “Add to new Team”.

c. Now you see that the NICs you selected before are marked with a check mark.

Type in a team name.

Then open “Additional properties” and go on with step 5.

Selection over Team box:

 

a. In the left box, click “Tasks” and then select “New Team”.

b. Select all NICs that should be part of the team.

Type in a team name.

Then open “Additional properties” and go on with step 5.

5. In the Additional properties you can configure more options.

Teaming mode

Switch-dependent modes: require a matching teaming configuration on the switch ports; all team members must be connected to the same switch.
Switch-independent modes: do not require any teaming configuration on the switch ports; team members can be connected to the same switch or to different switches.

Load balancing mode

Hyper-V switch port: traffic is distributed across the team members based on the MAC address of the virtual machine’s switch port. A single VM is limited to the bandwidth of one team member, but the VMs together can use the aggregate team bandwidth (e.g. 2 Gbit/s for two 1 GbE NICs).

Address Hash: a hash based on components of the packet keeps each traffic flow on one team member. A single flow is limited to 1 Gbit/s, while multiple flows are balanced across the team members.

Standby mode

Active/Active: all adapters are active; on a failure, traffic fails over to the other team members.
Active/Standby: one adapter is held in standby and only takes over when an active team member fails (you can select the standby adapter).
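The same team the wizard builds can also be created with one cmdlet from the NetLbfo module. This is a sketch I am adding (not part of the original walkthrough); the team and NIC names are assumptions:

```powershell
# Sketch: create "Team1" from two example NICs with switch-independent
# teaming and Hyper-V Port load balancing (names are assumptions).
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort

Get-NetLbfoTeam -Name "Team1"   # verify the team and its status
```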

 

6. Click on the primary team interface (“Team1: Default VLAN”).

Here you can set a specific VLAN if you need one.
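Setting the VLAN on the team interface can be scripted as well. A sketch with assumed values (team name and VLAN ID are examples):

```powershell
# Example: tag the default team interface of "Team1" with VLAN 10
# (team name and VLAN ID are assumptions).
Set-NetLbfoTeamNic -Team "Team1" -VlanID 10

Get-NetLbfoTeamNic -Team "Team1"   # confirm the VLAN assignment
```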

7. Click “OK” to close the wizard and save the configuration.

8. After this you should see the NIC team in your Server Manager.

 

 

How to Configure a DHCP Failover Cluster on Windows Server 2012

All of you know that we didn’t have any failover option for DHCP in the past. So most of us created split DHCP scopes for one IP range on different servers to get at least a partly redundant DHCP setup. This worked, but any changes to reservations, scopes, or configuration had to be made manually or with scripts, which took time and wasn’t always successful.

Now with Windows Server 2012, we get real failover including configuration replication. Please note that the only available modes are load balanced and hot standby. I will explain later when you should use which mode.

So let us start to configure our cluster.

 

1. You need to install the first DHCP server and configure the DHCP scope. This DHCP server has to run Windows Server 2012 Standard or Datacenter.

http://datacenter-flo.azurewebsites.net/?p=350

In this scenario I configured the first DHCP Server on Flo-SVR-DC01.
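For reference, the first server's role and scope can also be set up from PowerShell. This is a sketch I am adding; the scope name and address ranges are placeholders for your network:

```powershell
# Sketch with assumed values - adjust the scope name and ranges
# for your own network.
Install-WindowsFeature DHCP -IncludeManagementTools

Add-DhcpServerv4Scope -Name "Clients" `
    -StartRange 192.168.1.100 -EndRange 192.168.1.200 `
    -SubnetMask 255.255.255.0 -State Active
```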

 

2. Next, install a new server with Windows Server 2012, or take another free server from your existing Windows Server 2012 systems as the DHCP failover partner.

How to install a Windows Server 2012 http://datacenter-flo.azurewebsites.net/?p=203

First Configuration of a Windows Server 2012 http://datacenter-flo.azurewebsites.net/?p=222

In my case I installed a fresh Windows Server 2012 VM as Failover Partner.

 

3. Now you can add the new node to your Server Manager if you want to manage the server remotely. You can also configure the failover setup without this, but it helps to manage both servers later.

http://datacenter-flo.azurewebsites.net/?p=496

 

4. When the DHCP role has been installed correctly on the second host and you have added the server to your management host, you should see both systems under DHCP.

5. In the next step open the DHCP MMC.

6. In the DHCP MMC, add the second DHCP server first. You can do this by right-clicking “DHCP” and then “Add Server”.

http://datacenter-flo.azurewebsites.net/?p=689

7. Now you should see both DHCP Servers in the list.

12. In the next step we authorize the DHCP server in our domain.

http://datacenter-flo.azurewebsites.net/?p=688
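Authorization in the domain can be done from PowerShell as well (a sketch I am adding; the server name and IP address are examples):

```powershell
# Example values - replace with your second server's FQDN and IP address.
Add-DhcpServerInDC -DnsName "Flo-SVR-DHCP02.contoso.local" -IPAddress 192.168.1.11

Get-DhcpServerInDC    # list all authorized DHCP servers in the domain
```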

13. Now right-click the scope you want to cluster and select “Configure Failover”.

14. When the wizard starts, you see the scopes that can be clustered.

If no scope is shown, check that the DHCP service is running, the DHCP server is completely configured, and there are no issues with DNS or AD DS.

 

15. The next step is to select the failover partner.

16. If you authorized the second DHCP server before, you will see it in the list. Otherwise you have to select “This Server:” and “Browse”.

17. Now type in the name of the server.

18. When you have entered the name, click “Check Names”. When the wizard has found the server, click “OK”.

19. Click “OK” and the server will be attached to the DHCP MMC.

20. Now you see the selected server with its complete FQDN in the Partner Server field.

Click “Next” to go on.

 

21. Now you have to set the failover configuration.

Load Balanced:

Relationship Name: name of your failover relationship.

Maximum Client Lead Time: defines the amount of time the surviving server will wait before assuming control of the entire scope.

Mode: Load Balanced. When failover is configured in load balance mode, the two DHCP servers run in an active-active setup.

Use this mode when you have big networks with many clients or want to deploy the servers across different branch offices.

Load Balance Percentage: defines how the work is split between the two hosts. The percentages must add up to 100%; the node with the higher percentage gets the higher workload.

State Switchover Interval: automatically changes the partner’s state to “partner down” after <time>.

Enable Message Authentication: enables authentication between the failover partners.

Shared Secret: validation password with which the partners identify each other.
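The whole load-balanced relationship can be created in one call with the DhcpServer module. This is a sketch I am adding; the server names, scope ID, relationship name, and secret are all assumptions:

```powershell
# Sketch: 50/50 load-balanced failover for one scope (values assumed).
Add-DhcpServerv4Failover -ComputerName "Flo-SVR-DC01" `
    -PartnerServer "Flo-SVR-DHCP02" `
    -Name "DC01-DHCP02-Failover" `
    -ScopeId 192.168.1.0 `
    -LoadBalancePercent 50 `
    -MaxClientLeadTime 01:00:00 `
    -AutoStateTransition $true -StateSwitchInterval 00:45:00 `
    -SharedSecret "S3cr3t!"
```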

 

Standby:

Relationship Name: name of your failover relationship.

Maximum Client Lead Time: defines the amount of time the surviving server will wait before assuming control of the entire scope.

Mode: Hot Standby. When failover is configured in hot standby mode, one node is active, and the second is standby and only takes over when the primary DHCP server fails.

Use this mode when you need the partner purely for fault tolerance.

Addresses reserved for standby server: defines how many addresses the standby server can lease before it takes over the entire scope and becomes active.

State Switchover Interval: automatically changes the partner’s state to “partner down” after <time>.

Enable Message Authentication: enables authentication between the failover partners.

Shared Secret: validation password with which the partners identify each other.
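The hot-standby variant differs mainly in -ServerRole and -ReservePercent. Again a sketch with assumed names and values:

```powershell
# Sketch: hot-standby failover, local server active, 5% of the scope's
# addresses reserved for the standby partner (values are assumptions).
Add-DhcpServerv4Failover -ComputerName "Flo-SVR-DC01" `
    -PartnerServer "Flo-SVR-DHCP02" `
    -Name "DC01-DHCP02-HotStandby" `
    -ScopeId 192.168.1.0 `
    -ServerRole Active -ReservePercent 5 `
    -MaxClientLeadTime 01:00:00 `
    -SharedSecret "S3cr3t!"
```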

 

22. After clicking “Next” you see a short summary of your configuration.

23. Click “Finish” and the failover configuration starts.

24. In the DHCP MMC, right-click the scope and force a replication by clicking “Replicate Failover Scope”, then click the refresh button or press F5.

25. On the failover node, check the configuration. If it is correct, you are finished.
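Forcing the replication and checking the relationship can be scripted too (a sketch I am adding; the scope ID is an assumed example):

```powershell
# Force scope replication to the partner and inspect the relationship
# (the scope ID is an assumed example).
Invoke-DhcpServerv4FailoverReplication -ScopeId 192.168.1.0 -Force

Get-DhcpServerv4Failover    # shows mode, state, and partner of each relationship
```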