Hyper-V VMs not starting after startup repair

After a lazy Friday and some trial and destruction with my test Hyper-V cluster, I needed to run a startup repair on it. Afterwards I encountered a funny surprise when I tried to start my virtual machines.

They told me they could not start because the hypervisor was not running.

My first thought was “OK, maybe VT is disabled in the BIOS”. That was plausible because I had applied some firmware and BIOS updates during the session, and my old colleagues from Dell like to joke that updates enable and disable BIOS features. So I checked the BIOS. OK … everything fine there.

I restarted my host and checked the services. No, everything was fine there as well.

After that I did a few minutes of research on the web, and what did I find? An article from Ben Armstrong himself, describing the same issue.

 

So what was the reason my VMs did not want to start? As you all know, when you boot a Hyper-V host you don’t see the hypervisor on your desktop, you see the management OS. In other words, you have something like a VM running that manages the hypervisor below it. What happens when you do a startup repair? The boot configuration is recreated and linked directly to the installed operating system. The issue is that startup repair doesn’t know you are using a hypervisor, so it doesn’t set the parameter that starts the hypervisor before the management OS.

To fix this, you need a few easy steps.

1. Start a command prompt (cmd.exe) as administrator

2. Type bcdedit /set hypervisorlaunchtype auto

3. Reboot your system

If you use dual boot or something similar, you need to specify the boot loader identifier. How that works is shown in a blog post by Keith Combs.
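For reference, here is a hedged sketch of what that looks like from an elevated PowerShell session; the GUID is only a placeholder for whatever identifier bcdedit reports for your Windows Server entry:

# List all boot entries and note the identifier (a GUID) of your Hyper-V host entry.
bcdedit /enum

# Set the hypervisor launch type only for that entry. The quotes are needed so
# PowerShell passes the braces through to bcdedit unchanged.
bcdedit /set '{your-entry-guid}' hypervisorlaunchtype auto

# Verify the setting before you reboot.
bcdedit /enum '{your-entry-guid}'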


Windows Server User Group meeting on September 11 at the Microsoft office in Berlin

Hello to all Microsoft enthusiasts out there,

After a longer break, here is our second meeting. This time, as promised, it takes place in Microsoft’s new offices at Unter den Linden 17. Gernot has organized the large meeting room in the Startup Area for us. Thanks to the friendly support of Gernot and Oliver, food and drinks are taken care of and nobody will go thirsty.

Please register for the event using the following link: !!!Register for the event!!!

For our second event we were able to win Benedict Berger (MVP Hyper-V) as a speaker.

The first talk of the evening will be given by Benedict and is an overview of the hybrid cloud for IT pros. Given Benedict’s experience around Azure, Azure Pack and System Center, it will certainly be a very interesting talk.

The second talk will be given by Florian Klaffenbach. The topic will be connecting Azure to on-premises infrastructures, with a focus on Microsoft Azure ExpressRoute.

In addition, we are planning a small discussion round with Evgenij Smirnov to pick up your suggestions and topics.

The talks are technical and practice-oriented.

The following people will also be on site as experts for your questions.

Florian “Flo” Klaffenbach – Microsoft Solutions Architect & founder of the Windows Server UG Berlin, CGI (Germany)
Topics: Windows Server Hyper-V, Windows Server storage services (SMB, DFS, SoFS, file server etc.), Windows Server clustering, System Center Virtual Machine Manager, data center management and data center design (incl. server, storage and network technology)

Manuel Bräuer – System Engineer & co-founder of the Windows Server UG Berlin, Astendo GmbH
Topics: Windows Server Hyper-V, Microsoft Exchange Server, Microsoft Azure Pack, data center design (incl. server, storage and network technology) and hosting services

Oliver Klimkowsky – Microsoft Section Manager Region Nord, Microsoft Senior Solutions Architect & responsible for the user communities in Berlin, CGI (Germany)
Topics: Microsoft Exchange Server, Microsoft Active Directory Domain Services, hosting solutions

Gernot Richter – Datacenter Solution Sales, Microsoft Deutschland GmbH
Topics: Microsoft data center products and services

That only leaves me to wish you a lot of fun. I am already looking forward to seeing many of you there.

Best regards
Flo

My planned test environment #2 – domain structure

My test environment is growing. Today I finished the domain structure.

Domain Structure

I created one forest with a root domain and two child domains.

The root domain only consists of two domain controllers and has no other servers or services at the moment.

The first child domain is my resource domain for the physical systems I need for the lab. It holds my Hyper-V hosts, storage systems, switches, firewalls and routers. Now you may ask yourself, “why so complex, and why two domains?”. I like to follow some security best practices. One of them is that you should split the administrative rights for your Hyper-V hosts and storage systems. That means no administrator who is not part of that environment and allowed to make changes on those systems should be able to connect to them. The easiest way is to create a resource domain and a work domain. Both have different administrator accounts, and because of the root domain and the restricted access to it, you cannot delegate administrators to other domains.

That also protects your application servers and Active Directory from corruption by someone who may have compromised your Hyper-V hosts or physical systems.
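For completeness, here is a hedged PowerShell sketch of how such a child domain can be promoted with the ADDSDeployment module. The domain names and the credential are placeholders for this example, not the actual names of my lab:

# Install the AD DS role on the server that will become the first DC of the child domain.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote the server into a new child domain ("resources") below the placeholder
# root domain "lab.local", using a root domain admin account.
Install-ADDSDomain `
    -NewDomainName "resources" `
    -ParentDomainName "lab.local" `
    -DomainType ChildDomain `
    -InstallDns `
    -Credential (Get-Credential "LAB\Administrator") `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")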

 

Testing a Synology DS1813+ Part# 4 – Performance testing

Last week it was time to send back my toy, the Synology RS2414+, but I still have my DS1813+. So I go on testing.

The DS1813+ currently runs as my central iSCSI target and holds one SMB 2.0 share.

iSCSI LUNs

SMB Volume

All LUNs and volumes are distributed over a disk group with four 1 TB hard disks at 5,400/7,200 RPM (estimated 98/102 IOPS each).

Disk Group

Additionally, I configured SSD caching for iSCSI LUN VM01 to increase performance. The SSD cache directly impacts the memory usage of a Synology system, so I’m currently limited to 124 GB of SSD cache because I only have 2 GB of memory.

SSD cache

Now to my testing scenario. I had a basic load on my DS with six running VMs, whose VM files and VHDs are placed on the DS.

VMs

You can see the basic load on my DS and Hyper-V host in the screenshots.

Load Hyper-V Host

Load DS1813+

Now I started two storage live migrations of running VMs. Together they consume around 30 GB of storage. The data is transferred from my local Hyper-V host with a tiered storage space via a 1 GBit/s SAN network (converged network over three 1 GBit/s NICs with 1 GBit/s of bandwidth guaranteed). The DS has three 1 GBit/s NICs connected to my SAN network as a bond.
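As a side note, this kind of storage live migration can also be started from PowerShell. This is just a hedged sketch with placeholder names, not the exact commands I used:

# Move the configuration files and VHDs of a running VM to the iSCSI LUN
# mounted from the DS1813+ ("TestVM01" and the target path are placeholders).
Move-VMStorage -VMName "TestVM01" -DestinationStoragePath "D:\Hyper-V\TestVM01"

# The cmdlet runs synchronously by default, so start the migration for the
# second VM from another PowerShell window to let both run in parallel.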

In the screenshots you can see how the resource consumption increases.

Hyper-V Load during test.

DS load during test.

Conclusion: the DS1813+ is not a full enterprise system, but for small/medium environments and testing purposes the performance is more than great. I really enjoy that piece of toy.

How to configure a preferred cluster node in Failover Cluster Manager

If you use Failover Cluster Manager in Windows Server 2012 R2, you have the option to set a preferred cluster node to hold your cluster roles, such as virtual machines.

Why should you use this option? Easy answer: there are scenarios where you want to prevent cluster roles from running on the same node. In my case I want to keep guest cluster nodes and virtual domain controllers from running on the same node.

So if one node fails, I still have the partner virtual machine running on the other Hyper-V cluster node.

You can easily configure the option in the graphical interface. To do so, you can follow my screenshots.

Open Failover Cluster Manager and right-click the VM or role you want to change.


In the context menu select Properties

 

In the Properties – General tab you can select the preferred cluster node or nodes. After selecting the nodes, click on the Failover tab.

 

Now you should configure the failback option. I would suggest configuring failback with a period of one hour before your VM or role falls back to the preferred node. That prevents you from running into issues if a node fails again shortly after it returns to the cluster. Click OK and you’re done.
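If you prefer PowerShell over the GUI, here is a hedged sketch of the same settings with the FailoverClusters module; the role and node names are placeholders, and the failback window is only an example:

# Define the preferred owners for a clustered role (placeholder names).
Set-ClusterOwnerNode -Group "TestVM01" -Owners "HV01","HV02"

# Allow automatic failback to the preferred node ...
$group = Get-ClusterGroup -Name "TestVM01"
$group.AutoFailbackType = 1          # 1 = allow failback, 0 = prevent failback

# ... and restrict failback to a window of hours of the day (here 02:00 to 03:00),
# so the role does not immediately fail back to a node that just returned.
$group.FailbackWindowStart = 2
$group.FailbackWindowEnd = 3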

 

 

Testing a Synology RS2414+ & DS1813+ Part# 3 – HD Carrier & 6TB issue

Earlier today I got a tweet about my post “Testing a Synology RS2414+ & DS1813+ Part# 2 – HD Carrier” from my friend Andreas Erson. He asked me about the disk carriers and whether they are able to handle the new 6 TB hard disk mounting standard.


To read the report, please click the link.

Unluckily, I wasn’t able to test it on my own because I currently have no 6 TB disks. So I contacted Phillip Rankers from Synology again.

He told me that they updated the carrier, and every system bought after June 2014 is able to handle the 6 TB standard. For all other cases, Synology offers help to find a solution. So if you have an older system and want to use 6 TB disks, you only need to give Synology a call and they will help.

Source: http://www.storagereview.com/

Testing a Synology RS2414+ & DS1813+ Part# 2 – HD Carrier

My Synology testing series goes on. During the last weeks I bought a DS1813+ to extend my lab.

That put me in the good position of checking out a very small but important part of the system. I really like the HD carriers Synology uses. There are currently two types I know of: the DiskStation carrier and the RackStation carrier. Both are able to carry 3.5″ and 2.5″ HDs without any additional mounting kits.

The only thing that changes for the RS carrier is the position of the screws; for the DS carrier I needed to remove one fixing clamp and fasten the 2.5″ drive with a screw.

So in the end, a good solution. :)


Synology RS HD Carrier


Synology DS carrier with a 2.5″ SSD

My planned test environment #1 – physical structure

To answer some questions about the test systems I use, I want to give you a short overview of my planned test environment and which systems are currently in place.

Currently in place:
– DCF-SVR-HV01
– USB backup disks
– Brotback
– Jetfire
– Switches, firewall and router

Planned until end of 2014:
– DCF-SVR-HV02 / DCF-SVR-HV03
– Synology storage

Physical environment (storage, switches & Hyper-V hosts)


Virtual Network & Converged Networking

Testing a Synology RS2414+ Part# 1 – Story and System

As some of you know, I am currently building up a small home lab for the Windows Server User Group Berlin and my studies.

What I have been missing so far is a good and keen storage system. After some research and talking with friends and coworkers, I decided to check out Synology.

So I asked Synology if they could lend me a system with four or more disks and Windows Server 2012 / 2012 R2 cluster support.

After a few days I got a mail from Synology product manager Phillip Rankers, who offered to lend me a Synology RS2414+. That gives me the opportunity to test it for a few weeks.

Synology is not a classic enterprise storage vendor like Dell, HP, NetApp or EMC, but they offer great NAS systems. To learn more about Synology, click here.

First I want to tell you something about the tech specs of the system.

CPU
– CPU model: Intel Atom
– CPU frequency: dual core, 2.13 GHz

Memory
– System memory: 2 GB DDR3
– Memory module pre-installed: 2 GB x 1
– Total memory slots: 2
– Memory expandable up to: 4 GB (2 GB x 2)

Storage
– Drive bays: 12
– Max. drive bays with expansion unit: 24
– Compatible drive types:
  • 3.5″ SATA(III) / SATA(II) HDD
  • 2.5″ SATA(III) / SATA(II) HDD
  • 2.5″ SATA(III) / SATA(II) SSD
– Max. internal capacity: 72 TB

External ports
– USB 2.0 ports: 2
– USB 3.0 ports: 2
– Expansion port: 1

File system
– Internal drives: EXT4
– External drives: EXT4, EXT3, FAT, NTFS, HFS+

Appearance
– Size (height x width x depth): 88 mm x 445 mm x 570 mm
– Weight: 12.2 kg (the model I have); 14.2 kg for the RP model (the longer version)

Others
– LAN ports (RJ45): 4 x Gigabit
– Link aggregation
– Wake on LAN/WAN
– System fans: 4 x 80 mm x 80 mm
– Easy replacement system fan
– Wireless support (dongle)
– Noise level: 40.5 dB(A); 42.2 dB(A) for the RP model
– Power recovery
– Scheduled power on/off
– Power supply unit / adapter: 400 W; 2 x 400 W for the RP model
– AC input power voltage: 100 V to 240 V AC
– Power frequency: 50/60 Hz, single phase
– Power consumption: 105 W (access), 43 W (HDD hibernation); 125 W (access) and 57 W (HDD hibernation) for the RP model
– Redundant power supply (RP model only)
– Warranty: 3 years

The price for the system without disks and expansion unit starts at around 1,500 €.

So at first view, the system looks quite nice and is easy to understand. The Synology staff shows outstanding performance in customer handling and customer engagement.

I will keep you posted on how my testing goes.

 

Free ebook: Microsoft Azure Media Services Guidance

Source:  Microsoft

You can download the free ebook here.

This guide describes how to design and build a video-on-demand application that uses Media Services. It provides an end to end walkthrough of a Windows Store application that communicates with Media Services through a web service, and explains key design and implementation choices. In addition to the Windows Store application, the guide also includes client applications for Windows Phone, web, iOS, and Android devices.