As many of you already know, around this time last year I started writing a book about implementing Azure solutions, together with Oliver Michalski (MVP Azure) and Jan-Henrik Damaschke (MVP Cloud & Datacenter Management). After one year of hard work, many struggles, and even more changes due to the rapid development of Azure, the book is now ready to order via Packt and Amazon 🙂
We are very happy with the result. Hopefully you have as much fun reading it as we had writing it.
What this book covers

- Chapter 1, Getting Started with Azure Implementation, … In this chapter the reader gets an overview of cloud service models, cloud deployment models, cloud characteristics, and Azure services.
- Chapter 2, Azure Resource Manager and Tools, … In this chapter the reader learns all about the Azure Resource Manager and its concepts (Azure resource groups / Azure resource tags / locks). The reader also gets an introduction to working with ARM templates.
- Chapter 3, Deploying and Synchronizing Azure Active Directory, … In this chapter the reader gets an overview of the deployment, management, and functionalities of Azure Active Directory and its relation to a Microsoft Azure subscription.
- Chapter 4, Implementing Azure Networks, … In this chapter the reader learns how networking in Azure works, how to plan Azure network components, and how to deploy the different network components within Azure.
- Chapter 5, Implementing and Securing Storage Accounts, … In this chapter the reader learns all about Azure Storage management and its concepts (Blob / Table / Queue / File). The reader will also walk through some basic storage configurations.
- Chapter 6, Planning and Deploying Virtual Machines in Azure, … In this chapter the reader learns the differences between the Azure virtual machine types, the common use cases for the different types, and how to deploy virtual machines.
- Chapter 7, Implementing Cloud Services, … In this chapter the reader learns all about Azure Cloud Services, the Cloud Service architecture, Azure Cloud Services vs. Azure App Services, and how to create your first Cloud Service.
- Chapter 8, Exploring and Implementing Containers, … In this chapter the reader learns the basics of Azure Container Service, how to create your first container service, and the necessary steps for working with the service afterwards.
- Chapter 9, Securing an Azure Environment, … In this chapter the reader learns all about Azure security concepts (identity management with Azure AD / role-based access control / Azure Storage security) and the Azure Security Center.
- Chapter 10, Best Practices, … Based on a common use case and migration scenario, the reader gets a basic overview of how classic applications and services can be placed in the Microsoft Cloud ecosystem and which tools can be used for the migration.
While working with lots of customers on different Azure projects, I often hear that they want to minimize and reduce their on-premises hardware, or even banish every piece of server from their office locations.
In many cases that isn't really possible. Usually there are still applications which become very sluggish with a latency above 30 ms between service and user.
To close that gap, reduce the on-premises systems to a minimum, and save as much money as possible, I started to place Windows Hyper-V servers with Storage Spaces in the offices where I needed lower latency.
In the end we were able to reduce the needed infrastructure to just two servers, two switches, and one router or firewall. I personally call this set of hardware a "Cache Zone". The picture below shows a schematic view.
With that I'm able to place services on-premises and cover them via redundancy in the cloud. Currently I have a list of a few basic services like domain controllers, file servers, print servers, or internal web servers. For the covering of file servers, you can find my post here.
So what does it look like? First you need to connect your office with your cloud provider, either via VPN, MPLS, or, for some services, via HTTPS or other direct services over the internet. You place one partner, for example a domain controller, on-premises; the other ones are placed in the cloud.
That's nearly everything you need. If you use Windows Server 2016 Datacenter for the host, you also have all the licenses you need for the features of the virtual machines, such as Storage Replica.
As server systems, I currently have some small systems from Secure Guard in my test lab.
If you have any questions, don’t hesitate to contact me.
This week I got a mail from Carsten Rachfahl, the inventor and host of the Cloud & Datacenter Conference Germany. The CDC is one of the biggest IT conferences in Germany, and Carsten offered me a speaker slot at his conference 🙂
I'm so proud that I meet Carsten's high quality standards and will be able to share some knowledge about Microsoft Azure. The topic I'm speaking about isn't completely settled yet, but I think it will be Microsoft Azure ExpressRoute and Azure networking. 🙂
After having a great time at the MVP Summit and finishing chapter six of my first book, I wanted to put some more effort into my blog again. The first thing I want to talk about is a scenario to expand your Windows Server 2016 storage into the cloud and keep it available across branches.
The Scenario and use cases
Sometimes I have customers who need caches for file servers and storage within their branch offices. In the past I needed expensive storage devices or complex software, or I used DFS-R to transfer files.
With Windows Server 2016 we got Storage Replica, which gave me new opportunities to think about.
The first scenario I tried and built used Windows Server 2016 based file servers to replace DFS-R and establish asynchronous replication on the byte level instead of the file level, to reduce traffic.
You can use this kind of replication to move file or backup data into the cloud.
What technologies did I use?
Windows Server 2016 Storage Replica:
You can use Storage Replica to configure two servers to sync data so that each has an identical copy of the same volume. This topic provides some background of this server-to-server replication configuration, as well as how to set it up and manage the environment.
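The server-to-server configuration described above is set up with the Storage Replica PowerShell cmdlets. The sketch below shows the typical flow; the server names (FS-ONPREM, FS-AZURE), replication group names, and drive letters are assumptions for illustration, not values from this post.

```powershell
# Sketch only - server names, replication group names, and drive letters are placeholders.
# Storage Replica requires Windows Server 2016 Datacenter on both nodes.
Install-WindowsFeature -Name Storage-Replica -ComputerName FS-ONPREM -IncludeManagementTools -Restart
Install-WindowsFeature -Name Storage-Replica -ComputerName FS-AZURE  -IncludeManagementTools -Restart

# Validate the topology first (latency, bandwidth, log sizing) ...
Test-SRTopology -SourceComputerName FS-ONPREM -SourceVolumeName D: -SourceLogVolumeName E: `
                -DestinationComputerName FS-AZURE -DestinationVolumeName D: -DestinationLogVolumeName E: `
                -DurationInMinutes 30 -ResultPath C:\Temp

# ... then create the partnership; asynchronous mode fits a WAN/Azure link.
New-SRPartnership -SourceComputerName FS-ONPREM -SourceRGName RG-OnPrem `
                  -SourceVolumeName D: -SourceLogVolumeName E: `
                  -DestinationComputerName FS-AZURE -DestinationRGName RG-Azure `
                  -DestinationVolumeName D: -DestinationLogVolumeName E: `
                  -ReplicationMode Asynchronous
```

Note that each server needs a separate log volume (E: here) next to the data volume, which is also why the Azure VM needs at least two data disks.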
The Infrastructure Architecture
In the first place you need one file server as the source and one file server as the target. You also need to ensure that the data you want to replicate is on a different volume than the data that stays onsite.
The source can either run on hardware or, which would be the most cost-efficient way, on a Windows Server 2016 Hyper-V cluster together with other virtual machines like domain controllers, backup servers, web servers, or databases. With this kind of cluster you would also save the license costs for the file server's Datacenter license, because you can use the host license with AVMA, and you can leverage Windows Server 2016 license mobility to Azure, which enables you to use your Windows Server license for Azure virtual machines.
The Azure virtual machine should be a DS3 or above, because you need at least two data disks (one for the data and one for the replication log). If you want to replicate more volumes, pick a size that lets you attach more disks.
On the network side you need to implement a site-to-site VPN connection between your offices and Azure. You need a high-performance gateway to get the necessary throughput and latency. I would recommend using Microsoft ExpressRoute and MPLS.
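For a lab setup without ExpressRoute, a route-based site-to-site VPN gateway can be provisioned with the Azure CLI. This is only a sketch under assumed names; the resource group, VNet, office public IP, address prefixes, and shared key are placeholders:

```shell
# Placeholders throughout: resource group, VNet, IPs, prefixes, and key are examples only.
az network public-ip create -g rg-cachezone -n pip-vpngw --allocation-method Dynamic

# The gateway SKU determines throughput; pick a higher SKU if you need more bandwidth.
az network vnet-gateway create -g rg-cachezone -n vpngw-azure \
  --vnet vnet-azure --public-ip-address pip-vpngw \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Represent the office VPN device and its local address space ...
az network local-gateway create -g rg-cachezone -n lgw-office \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/24

# ... and connect the two with a shared key.
az network vpn-connection create -g rg-cachezone -n cn-office-azure \
  --vnet-gateway1 vpngw-azure --local-gateway2 lgw-office --shared-key 'MyS2SKey123'
```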
Scenarios for using those file server volumes
The first scenario I tested so far was a geo-redundant standby system for a file server with profile data and shares. The two servers do not run in a cluster (I didn't try that yet). Both servers are part of a DFS-N namespace. The on-premises server is the primary DFS-N target for the clients; the file server in Azure is the secondary target and is disabled as a target for clients in DFS-N. Access goes to the primary file server, and the storage information is replicated to the secondary file server.
As long as everything goes fine, you only have incoming traffic to Azure, with no costs for traffic. If the primary volume or file server goes offline, you switch to the secondary file server by enabling it in DFS-N and swapping the target volume to active. You can either do this manually or trigger it via automation services and monitoring, e.g. Azure Automation and Operations Management Suite, or System Center Operations Manager.
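The manual failover boils down to two steps: reverse the Storage Replica direction so the Azure volume becomes writable, then flip the DFS-N referral targets. A hedged sketch; the server names, replication group names, and namespace paths are placeholders, not values from this post:

```powershell
# Placeholders: server names, replication groups, and namespace paths are examples only.
# 1. Reverse the replication direction so the Azure copy becomes the writable source:
Set-SRPartnership -NewSourceComputerName FS-AZURE -SourceRGName RG-Azure `
                  -DestinationComputerName FS-ONPREM -DestinationRGName RG-OnPrem

# 2. Enable the Azure server as a referral target in the DFS namespace
#    and take the failed on-premises target out of the referral list:
Set-DfsnFolderTarget -Path '\\contoso.com\files\profiles' -TargetPath '\\FS-AZURE\profiles'  -State Online
Set-DfsnFolderTarget -Path '\\contoso.com\files\profiles' -TargetPath '\\FS-ONPREM\profiles' -State Offline
```

Both steps could be wrapped into a runbook, e.g. in Azure Automation, and triggered by a monitoring alert.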
You can also use the file server in Azure as a replication target for several different file servers.
A different approach could be to use this scenario for backups. First you back up your data to the primary file share or volume and replicate it to the cloud.
After the backup is finished, you switch the volume and transfer the backup to a cheaper location, e.g. an Azure storage account with the Cool access tier.
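Moving the finished backup into cheap blob storage can be done with AzCopy; newer versions let you set the access tier directly at upload. A sketch with a placeholder account, container, and SAS token:

```shell
# All names are placeholders; the <SAS-token> must be generated for your own account.
azcopy copy "D:\Backup\*" \
  "https://mystorageacct.blob.core.windows.net/backups?<SAS-token>" \
  --recursive --block-blob-tier=Cool
```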
The Pro’s and Con’s
|Easy to use||ExpressRoute needed for best performance|
|Azure License for Azure VM might be covered by your on premises license||Not documented yet and only Proof of concept|
|No need for expensive Storage Systems|
|Great to replicate File Data and Backups into the cloud|