Windows Server User Group meetup on September 11 at the Microsoft office in Berlin

Hello to all Microsoft enthusiasts out there,

after a longer break, here is our second meetup. This time, as promised, we meet in Microsoft's new offices at Unter den Linden 17. Gernot has organized the large meeting room in the Startup Area for us, and thanks to the friendly support of Gernot and Oliver, food and drinks are taken care of, so nobody will go thirsty.

Please register for the event using the following link: !!!Event registration!!!

For our second event we were able to win Benedict Berger (MVP Hyper-V) as a speaker.

The first talk of the evening will be given by Benedict and is an overview of the hybrid cloud for IT pros. With Benedict's experience around Azure, Azure Pack and System Center, it will certainly be a very interesting talk.

The second talk will be given by Florian Klaffenbach. The topic will be connecting Azure to on-premises infrastructures, with a focus on Microsoft Azure ExpressRoute.

In addition, we are planning a small discussion round with Evgenij Smirnov to pick up your suggestions and topics.

The talks are technical and practice-oriented.

The following people will also be on site as experts for your questions:

Name: Florian "Flo" Klaffenbach
Role: Microsoft Solutions Architect & Founder Windows Server UG Berlin
Company: CGI (Germany)
Topics:
– Windows Server Hyper-V
– Windows Server Storage Services (SMB, DFS, SoFS, file server, etc.)
– Windows Server Clustering
– System Center Virtual Machine Manager
– Data center management and data center design (incl. server, storage and network technology)

Name: Manuel Bräuer
Role: System Engineer & Co-Founder Windows Server UG Berlin
Company: Astendo GmbH
Topics:
– Windows Server Hyper-V
– Microsoft Exchange Server
– Microsoft Azure Pack
– Data center design (incl. server, storage and network technology) and hosting services

Name: Oliver Klimkowsky
Role: Microsoft Section Manager Region Nord, Microsoft Senior Solutions Architect & responsible for the Berlin user communities
Company: CGI (Germany)
Topics:
– Microsoft Exchange Server
– Microsoft Active Directory Domain Services
– Hosting solutions

Name: Gernot Richter
Role: Datacenter Solution Sales
Company: Microsoft Deutschland GmbH
Topics:
– Microsoft datacenter products and services

All that is left for me is to wish you a lot of fun. I am looking forward to seeing many of you there.

Best regards
Flo

My planned test environment #2 – Domain structure

My test environment is growing. Today I finished the domain structure.

Domain Structure

I created one forest with a root domain and two child domains.

The root domain only consists of two domain controllers and has no other servers or services at the moment.

The first child domain is my resource domain for the physical systems I need for the lab. It holds my Hyper-V hosts, storage systems, switches, firewalls and routers. Now you may ask yourself: "Why so complex, and why two domains?" I like to follow some security best practices. One of them is that you should separate the administrative rights for your Hyper-V hosts and storage systems. That means no administrator who is not part of that environment and allowed to make changes to those systems should be able to connect to them. The easiest way is to create a resource domain and a work domain. Both have different administrator accounts, and because of the root domain and the restricted access to it, you cannot delegate administrators to other domains.

That also protects your application servers and Active Directory from corruption by someone who may have compromised your Hyper-V hosts and other physical systems.
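
If you want to rebuild a similar structure in your own lab, the forest, root domain and child domains can also be deployed with PowerShell. The following is only a rough sketch with example names (lab.local as the root, res as the resource child domain); it is not the exact script I used.

# Rough sketch with example names, using the ADDSDeployment module on Windows Server.
# On the first domain controller of the root domain: create the forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "lab.local" -DomainNetbiosName "LAB"

# On the first domain controller of the resource child domain: add it to the forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomain -NewDomainName "res" -ParentDomainName "lab.local" `
    -DomainType ChildDomain -Credential (Get-Credential "LAB\Administrator")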

 

Testing a Synology DS1813+ Part# 4 – Performance testing

Last week it was time to send back my toy, the Synology RS2414+, but I still have my DS1813+. So I keep on testing.

The DS1813+ currently runs as my central iSCSI target and holds one SMB 2.0 share.

iSCSI LUNs

SMB Volume

All LUNs and volumes are distributed over a disk group with four 1 TB hard disks at 5,400/7,200 RPM (estimated 98/102 IOPS each).

Disk Group

Additionally, I configured SSD caching for iSCSI VM01 to increase performance. The SSD cache directly impacts the memory usage of a Synology system, so I'm currently limited to 124 GB of SSD cache because I only have 2 GB of memory.

SSD cache

Now to my testing scenario. I had a basic load on my DS with six running VMs whose VM files and VHDs are placed on the DS.

VMs

You can see the basic load on my DS and Hyper-V host in the screenshots.

Load Hyper-V Host

Load DS1813+

Now I started two storage live migrations of running VMs. Together they consume around 30 GB of storage. The data is transferred from my local Hyper-V host with a tiered storage space over the 1 GBit/s SAN network (a converged network over three 1 GBit/s NICs with 1 GBit/s of bandwidth guaranteed). The DS has three 1 GBit/s NICs configured as a bond and connected to my SAN network.
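
For reference, such a storage live migration can also be started from PowerShell instead of Hyper-V Manager. This is only a sketch with made-up VM names and a made-up destination path, not the exact commands from my test:

# Sketch with example names and paths; adjust them to your environment.
# Move the storage of a running VM to the iSCSI-backed volume:
Move-VMStorage -VMName "TestVM01" -DestinationStoragePath "D:\iSCSI-VM01\TestVM01"

# Start a second migration as a background job to generate concurrent load:
Start-Job { Move-VMStorage -VMName "TestVM02" -DestinationStoragePath "D:\iSCSI-VM01\TestVM02" }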

In the screenshots you can see how the resource consumption increases.

Hyper-V load during the test

DS load during the test

Conclusion: The DS1813+ is not a full enterprise system, but for small and medium environments and for testing purposes the performance is more than great. I really enjoy that piece of toy.

How to configure a preferred cluster node in Failover Cluster Manager

If you use the Failover Cluster Manager in Windows Server 2012 R2, you have the option to set a preferred cluster node to hold your cluster roles, such as virtual machines.

Why should you use this option? Easy answer: there are scenarios where you want to prevent certain cluster roles from running on the same node. In my case, I want to prevent guest cluster nodes and virtual domain controllers from running on the same node.

So if one node fails, I still have the failover virtual machine running on the other Hyper-V cluster node.

You can easily configure the option in the graphical interface. To do so, you can follow my screenshots.

Open the Failover Cluster Manager and right-click the VM or role you want to change.

In the context menu, select Properties.

 

In the Properties dialog, on the General tab, you can select the preferred cluster node or nodes. After selecting the node(s), please click on the Failover tab.

 

Now you should configure the failback option. I would suggest configuring failback with a delay of one hour before your VM or role falls back to the preferred node. That prevents you from running into issues if a node fails again shortly after it returns to the cluster. Click OK and you're done.
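
If you prefer PowerShell over the GUI, the same settings can be scripted with the FailoverClusters module. The snippet below is only a sketch; the group and node names are examples, not values from my cluster.

# Sketch with example names; requires the FailoverClusters PowerShell module.
# Set the preferred owner nodes for a clustered VM role:
Set-ClusterOwnerNode -Group "VM-DC01" -Owners "HV-Node01","HV-Node02"

# Allow failback and restrict it to a time window (hours of the day, 0-23),
# which corresponds to the failback settings on the Failover tab:
$group = Get-ClusterGroup -Name "VM-DC01"
$group.AutoFailbackType    = 1   # 0 = prevent failback, 1 = allow failback
$group.FailbackWindowStart = 1   # earliest hour of the day for failback
$group.FailbackWindowEnd   = 5   # latest hour of the day for failback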

 

 

Testing a Synology RS2414+ & DS1813+ Part# 3 – HD Carrier & 6TB issue

Earlier today I got a tweet about my post “Testing a Synology RS2414+ & DS1813+ Part# 2 – HD Carrier” from my friend Andreas Erson. He asked me about the disk carriers and whether they can handle the new 6 TB hard disk mounting standard.

To read the report, please click the link.

Unfortunately, I wasn't able to test it myself because I currently have no 6 TB disks. So I contacted Phillip Rankers from Synology again.

He told me that they updated the carriers and that every system bought after June 2014 can handle the 6 TB standard. For all other systems, Synology offers help to find a solution. So if you have an older system and want to use 6 TB disks, you only need to give Synology a call and they will help.

Source: http://www.storagereview.com/