Creating an Availability Group with Microsoft Azure

Have you ever wanted to try out SQL Server Availability Groups but never had enough spare tin lying around to set up an environment? Well, the Azure portal has been updated, and now you don’t need any physical kit lying around, nor do you need a beefed-up laptop running Hyper-V!

All you need is an MSDN subscription that includes an amount of Azure credit. Luckily I have a premium subscription, which entitles me to £65 of Azure credit every month. First you’ll need to log in to your MSDN subscription and activate your Azure subscription, and then you’re ready to go!

At the end of August the SQL Server team announced the release of the SQL Server AlwaysOn Azure portal gallery item. Before this gallery item was available, if you wanted to configure an AlwaysOn Availability Group (AG) you needed to create all the individual machines that comprise the infrastructure manually. This included setting up the virtual network, configuring the domain and then configuring SQL Server. The addition of the gallery item greatly simplified the task of creating and automating an AG.

In this tutorial I will show you where to find this gallery item and how to use the Azure portal to configure a SQL Server 2014 AlwaysOn Availability Group setup.

1)      Using a web browser, navigate to http://portal.azure.com and log in to your Azure subscription. Once you’re logged in you will be presented with a screen similar to the one below.

[Screenshot: 001 - Portal]

2)      Click ADD in the bottom left-hand corner, then click “Everything”.

[Screenshot: 002 - Gallery1]

3)      Click “Data, storage, cache + backup” and you should see an option for SQL Server 2014 AlwaysOn (you may also have seen this option at other points in this process).

[Screenshot: 003 - Gallery2]

4)      Once you have selected the SQL Server 2014 AlwaysOn option the screen below appears. Complete the fields with your chosen user names and passwords. The Resource Group is a named collection that makes it simple to see and manage related Azure resources.

[Screenshot: 004 - AGConfig001]

5)      Once complete, select the SQL Server Settings – Configure Settings option. Specify a host name prefix, an availability group name, and a name for the availability group listener. Clicking PRICING TIER allows you to select from three pre-defined pricing options for your SQL Servers; there is also an option to browse all pricing tiers. I selected the 4-core, 28GB variant.

[Screenshot: 005 - AGConfig002]

6)      Ensure that you give the SQL Server service account a descriptive name and an appropriately strong password. Once finished, click OK.

7)      Moving on to the domain configuration, the first step is to specify a host name prefix (I’ve used the same prefix as in step 5). Give your domain a name and select the pricing tier for your domain controllers.

[Screenshot: 006 - AGConfig003]

8)      Configuring a virtual network is easy. Specify a name for your virtual network and an IP address range.

[Screenshot: 007 - AGConfig004]

9)      Storage accounts within Azure are containers for blob storage. All hard disks associated with Azure VMs are stored as blobs, and therefore you must specify a storage account to hold them. The account must have a unique name. If you have already created a storage account, you may use it to store your VMs. Select an appropriate pricing model.

[Screenshot: 008 - AGConfig005]

10)   Once the storage is configured, the only thing left to do is choose the data centre where you want your VMs to be stored. I have chosen Northern Europe (Dublin). Click Create and you’re done!

[Screenshot: 009 - AGConfig006]

The creation process takes about an hour to complete but can vary depending on the time of day. Once the creation is complete you will be able to see all of the VMs that were created to form the solution by selecting Browse – Virtual Machines from the Azure portal landing page.

Now you can start adding databases to your availability group! I hope you’ve found this tutorial useful.
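
As a sketch of that next step (the names here are hypothetical – use the availability group name you chose in step 5, and note the database must be in FULL recovery with a full backup taken):

-- On the primary replica: add an existing database to the availability group
ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [MyDatabase];

-- On each secondary replica, after restoring the database and its log backups
-- WITH NORECOVERY, join it to the availability group
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyAG];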

Azure is great for quickly creating a proof of concept, or a sandbox environment that you can play about in. More importantly, Azure has the capability to provide an enterprise-scale platform once your POC has been proven. I would encourage you to create all your POCs in Azure from today. Too many times I’ve worked for companies where great ideas have been shelved and forgotten because the procurement process for physical kit, or a reliance on virtualisation teams, hampered the exploration of those ideas.

Cloud providers are dramatically changing the speed of delivery for IT. I for one am completely sold on “The Cloud” and the more I explore the Azure platform the more excited I am about my upcoming projects and how it can help!

SQL Server Data Files in Azure – Part 1 – Creating a database

One of the interesting new features of SQL Server 2014 is the ability to place SQL Server database files directly in Azure blob containers. The purpose of this post is to walk you through setting up a database with its data and log files in Azure. In a later post I plan to test the performance of a database configured this way.

Why would you want to put data files in Azure?

Ease of migration – You could almost consider Azure blob storage to be a NAS in the cloud with unlimited storage potential. You can host your data and log files in Azure while your databases remain attached to local SQL Server instances. Migrating a database between servers then becomes as simple as issuing a detach and an attach command and, hey presto, you’re done.
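
As a minimal sketch of that detach/attach migration (assuming the SQL Server credential for the container – created later in this post – exists on both instances, and using hypothetical database and file names):

-- On the old server: detach the database, leaving its files in the Azure container
USE master;
EXEC sp_detach_db @dbname = N'MyDatabase';

-- On the new server: attach the database from the same blob URLs
CREATE DATABASE MyDatabase
ON ( FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/MyDatabase.mdf' ),
   ( FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/MyDatabase_log.ldf' )
FOR ATTACH;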

Cost and storage benefits – Buying disks for on-premises servers is relatively cheap these days when compared to the cost of SQL Server licensing. The cost of storing blobs in Azure can be found here: https://azure.microsoft.com/en-gb/pricing/details/storage/ but your first 1TB is (at the time of writing) charged at as little as £0.0147 per GB, which is hard to beat. You can store up to 500TB in each storage account, but one restriction to be aware of is that the maximum size of a page blob (the type we need) is 1TB. This means you may need to create more data files in your filegroups if you have databases greater than 1TB in size, as sketched below.
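
For example, a sketch of splitting data across additional files, each backed by its own page blob (the database and container names match the ones created later in this post; the extra file is hypothetical):

-- Each data file maps to its own page blob, so each can grow to the 1TB blob limit
ALTER DATABASE TestAzureDataFiles
ADD FILE ( NAME = TestAzureDataFiles2,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData2.ndf' );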

Disaster recovery – Disaster recovery plans might be greatly simplified. Having an HA on-premises solution to handle minor, non-catastrophic outages is something I always advise. Having standby kit for the purposes of DR can be expensive, especially when you consider that the likelihood you’ll ever need it might be slim. If your DR plan is simply to stand up a new server quickly in an alternate location, you don’t want to have to worry about fetching off-site backups or copying files over to the new location (if you can get them!). Simply re-connect your new server to the Azure blobs!

Snapshot backup – Whilst I always advise taking traditional SQL Server backups, it is possible to use Azure snapshots to provide near-instantaneous backups. For more information on that feature see here: https://msdn.microsoft.com/en-us/library/mt169363.aspx
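
As a hedged sketch of what that looks like (note that file-snapshot backups shipped with SQL Server 2016, so this won’t run on the 2014 instance used in this post; it also assumes a credential exists for the backup URL):

-- Take a near-instantaneous backup using Azure blob snapshots
BACKUP DATABASE TestAzureDataFiles
TO URL = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureDataFiles.bak'
WITH FILE_SNAPSHOT;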

How do we do it?

We’re going to be using PowerShell to do a lot of the configuration for us. I’ll be creating a new database, but it is also possible to migrate databases you already have (the detach/attach sketch earlier in this post should help with that).

I will be using the ARM (Azure Resource Manager) version of Azure and its associated PowerShell. If you’re using ASM (Azure Service Management, otherwise known as Classic) then you may need to alter the script a little.

So the steps we need to complete are:

  1. Create a resource group
  2. Create a storage account
  3. Create a container
  4. Create an access policy
  5. Create a shared access signature
  6. Create a SQL Server credential
  7. Finally, create our database

Use the PowerShell below to address steps 1 – 5. Comments in the code will help you to understand what is happening at each stage.

# Set these to your subscription name, resource group name, storage account name, and location
#
$SubscriptionName='SimonCoeo'
$resourceGroupName= 'simontestazureblob'
$StorageAccountName='simontestazureblob'
$location= 'North Europe'

# Set the Azure mode to ARM
#
Switch-AzureMode -Name AzureResourceManager
New-AzureResourceGroup -Name $resourceGroupName -Location $location
New-AzureStorageAccount -Type Standard_LRS -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName -Location $location

# A new container name, must be all lowercase.
#
$ContainerName='simontestazureblob'

# Sets up the Azure Account, Subscription, and Storage context
#
Select-AzureSubscription -SubscriptionName $subscriptionName
$accountKeys = Get-AzureStorageAccountKey -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $accountKeys.Key1

# Creates a new container
#
$container = New-AzureStorageContainer -Context $storageContext -Name $containerName
$cbc = $container.CloudBlobContainer

# Sets up a Stored Access Policy and a Shared Access Signature for the new container
#
$permissions = $cbc.GetPermissions();
$policyName = 'policy1'
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$policy.SharedAccessStartTime = $(Get-Date).ToUniversalTime().AddMinutes(-5)
$policy.SharedAccessExpiryTime = $(Get-Date).ToUniversalTime().AddYears(10)
$policy.Permissions = "Read,Write,List,Delete"
$permissions.SharedAccessPolicies.Add($policyName, $policy)
$cbc.SetPermissions($permissions);

# Gets the Shared Access Signature for the policy
#
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$sas = $cbc.GetSharedAccessSignature($policy, $policyName)
Write-Host "Shared Access Signature = $($sas.Substring(1))"

# Outputs the Transact SQL to create the Credential using the Shared Access Signature
#
Write-Host 'Credential T-SQL'
$TSQL = "CREATE CREDENTIAL [{0}] WITH IDENTITY='Shared Access Signature', SECRET='{1}'" -f $cbc.Uri,$sas.Substring(1)

Write-Host $TSQL

Once that has executed, take the T-SQL script that was generated as the output and run it on the instance of SQL Server where you want to create your database.

-- Output From PoSh
CREATE CREDENTIAL [https://simontestazureblob.blob.core.windows.net/simontestazureblob] 
WITH IDENTITY='Shared Access Signature', 
SECRET='Your SAS Key'

And for our final step, a relatively straightforward CREATE DATABASE command.

-- Create database with data and log files in our Windows Azure container.
CREATE DATABASE TestAzureDataFiles 
ON
( NAME = TestAzureDataFiles,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData.mdf' )
 LOG ON
( NAME = TestAzureDataFiles_log,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData_log.ldf')
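
To confirm where the files ended up, you can query sys.master_files and check that the physical names are blob URLs (a quick sanity check, not part of the original steps):

-- The physical_name column should show the https:// URLs of the blobs
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('TestAzureDataFiles');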

[Screenshot: AzureDataFiles1]

Now that that has been created, we can look to compare the performance of our Azure-hosted database against a locally stored one. Once I’ve got some test results to share I’ll post Part 2.

The curious case of the slowly starting cluster

I came across an interesting case today that I thought was worth a quick blog post. I was working with a customer who was migrating onto their newly built SQL Server 2014 clustered instance. Part of the pre-migration testing was to fail the SQL Server service between the two nodes to ensure it started correctly. What we found was that the SQL Server service wouldn’t accept incoming client connections, or appear “online” in Failover Cluster Manager, for around 75 seconds. Given the number of databases, virtual log files and resources, I would have expected failover to happen much more quickly.

Reviewing the SQL Server error log, we found many scripts being executed “upgrading databases” before the message “SQL Server is now ready for client connections. This is an informational message; no user action is required.” appeared. Upon investigation we found that one of the nodes was running SQL Server 2014 Service Pack 1 while the other was running SQL Server 2014 RTM: every time the service came up on the SP1 node it had to run the Service Pack upgrade scripts against the databases before accepting connections.

In order to identify this we ran the following query on the current active node:

SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel') AS ProductLevel,
    SERVERPROPERTY('Edition') AS Edition;
GO

We then failed the service over onto the other node and ran the query again. The product level reported by the two nodes was different: one reported RTM, the other SP1. Once the RTM node was upgraded to SP1 we were able to fail the SQL Server service between the two nodes in under 15 seconds.

Always make sure that nodes in your SQL Server clusters are running at the same patch level!