Restoring 1000s of SQL Server Databases Using PowerShell

Recently I have been working with a client who has a production SQL Server instance with ~2.5k databases on it. They are in the process of moving to a new SQL Server cluster and are reviewing their other non-production environments too.

One of their major pain points at the moment is their pre-production refresh process, which involves restoring all 2.5k databases to a pre-production SQL Server instance. The current process involves retrieving backups from Iron Mountain and performing file system restores. They are looking to streamline this process and automate it as much as possible. Most of it could be achieved using T-SQL, but there are extra file system tasks to perform that are more easily achieved using PowerShell. The SQLPS module that ships with SQL Server 2012 and later gives us the Restore-SqlDatabase cmdlet, so it was worth evaluating whether I could encapsulate the entire thing in one script. It’s still a work in progress, but here are some snippets from it.

Firstly, I should probably explain that their backups are all stored in a single network share in this structure:

\\BackupRoot\Database\Full\backup.bak

The objective is to restore the most recent full backup of each production database to the pre-production server.

I’ve created some variables at the top of the script (real variable data removed):

# Requires the SQLPS module for Invoke-Sqlcmd and Restore-SqlDatabase...
Import-Module SQLPS -DisableNameChecking

$backupRoot = Get-ChildItem -Path "C:\Temp\Databases"   # one folder per database
$datafilesDest = "C:\Temp\DataFiles"
$logfilesDest = "C:\Temp\LogFiles"
$server = "SIMON-XPS13\SQL2014"

A description of the individual commands within the main body is included in the code below.

## For each folder in the backup root directory...
#
foreach($folder in $backupRoot)
{   
    # Get the most recent .bak file in this database's folder...
    $backupFiles = Get-ChildItem -Path $folder.FullName -Filter "*.bak" -Recurse | Sort-Object -Property CreationTime -Descending | Select-Object -First 1

    # For each .bak file...
    foreach ($backupFile in $backupFiles)
    {
        # Restore the header to get the database name...
        $query = "RESTORE HEADERONLY FROM DISK = N'"+$backupFile.FullName+"'"
        $headerInfo = Invoke-Sqlcmd -ServerInstance $server -Query $query
        $databaseName = $headerInfo.DatabaseName

        # Restore the file list to get the logical filenames of the database files...
        $query = "RESTORE FILELISTONLY FROM DISK = N'"+$backupFile.FullName+"'"
        $files = Invoke-Sqlcmd -ServerInstance $server -Query $query

        # Differentiate data files from log files...
        $dataFile = $files | Where-Object -Property Type -EQ "D"
        $logFile = $files | Where-Object -Property Type -EQ "L"

        # Set some variables...
        $dataFileName = $dataFile.LogicalName
        $logFileName = $logFile.LogicalName

        # Set the destination of the restored files...
        $dataFileFullPath = $datafilesDest+"\"+$dataFileName+".mdf"
        $logFileFullPath = $logfilesDest+"\"+$logFileName+".ldf"

        # Create some "Relocate" file objects to pass to the Restore-SqlDatabase cmdlet...
        $RelocateData = New-Object 'Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' -ArgumentList $dataFileName, $dataFileFullPath
        $RelocateLog = New-Object 'Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' -ArgumentList $logFileName, $logFileFullPath

        # Perform the database restore... and then go around the loop.
        Restore-SqlDatabase -ServerInstance $server -Database $databaseName -BackupFile $backupFile.FullName -RelocateFile @($RelocateData,$RelocateLog) -ReplaceDatabase
    }
} 

With this script I was able to restore all 2.5k databases with relative ease. The great thing is that it can be run in just a few clicks and is much less arduous than the previous process. This script is unlikely to suit your exact use case, but hopefully you can take from it what you need and tailor it to your environment.
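One likely tailoring point: the loop above assumes each database has exactly one data file and one log file. Below is a minimal sketch, reusing the variables from the script (untested against the original environment), that instead builds a relocation entry for every file returned by RESTORE FILELISTONLY:

# Build a RelocateFile object for every file in the backup's file list...
$relocate = @()
foreach ($file in $files)
{
    # Log files go to the log destination; data files (including any .ndf files) go to the data destination...
    $destination = if ($file.Type -eq "L") { $logfilesDest } else { $datafilesDest }

    # Keep the original physical file name (beware collisions if databases share file names)...
    $fileName = Split-Path -Path $file.PhysicalName -Leaf
    $relocate += New-Object 'Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' -ArgumentList $file.LogicalName, (Join-Path -Path $destination -ChildPath $fileName)
}

# Restore using the full set of relocations...
Restore-SqlDatabase -ServerInstance $server -Database $databaseName -BackupFile $backupFile.FullName -RelocateFile $relocate -ReplaceDatabase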

If you have any questions, then please don’t hesitate to contact me.

Creating an Availability Group with Microsoft Azure

Have you ever wanted to try out SQL Server Availability Groups but never had enough spare tin lying around to set up an environment? Well, the Azure portal has been updated, and now you don’t need any physical kit lying around, nor do you need a beefed-up laptop running Hyper-V!

All you need is an MSDN subscription that includes an amount of Azure credit. Luckily I have a premium subscription, which entitles me to £65 of Azure credit every month. Firstly you’ll need to log in to your MSDN subscription and activate your Azure subscription, and then you’re ready to go!

At the end of August the SQL Server team announced the release of the SQL Server AlwaysOn Azure portal gallery item. Before this gallery item was available, if you wanted to configure an AlwaysOn Availability Group (AG) you had to create all of the individual machines that comprise the infrastructure manually. This included setting up the virtual network, configuring the domain and then configuring SQL Server. The addition of the gallery item greatly simplified the task of creating and automating an AG.

In this tutorial I will show you where to find this gallery item and how to use the Azure portal to configure a SQL Server 2014 AlwaysOn Availability Group setup.

1) Using a web browser, navigate to http://portal.azure.com and log into your Azure subscription. Once you’re logged in you will be presented with a screen similar to the one below.

001 - Portal

2) Click ADD in the bottom left-hand corner, then click “Everything”.

002 - Gallery1

3) Click Data, storage, cache + backup and you should see an option for SQL Server 2014 AlwaysOn (you may also have seen this option at other points in this process).

003 - Gallery2

4) Once you have selected the SQL Server 2014 AlwaysOn option, the screen below appears. Complete the fields with your chosen user names and passwords. The Resource Group is a name for a collection of related Azure resources that makes them simple to see and manage.

004 - AGConfig001

5) Once complete, select the SQL Server Settings – Configure Settings option. Specify a host name prefix, an availability group name, and a name for the availability group listener. Clicking PRICING TIER will allow you to select from three pre-defined pricing options for your SQL Servers; there is also an option to browse all pricing tiers. I selected the 4-core, 28GB variant.

005 - AGConfig002

6) Ensure that you give the SQL Server service account a descriptive name and an appropriately strong password. Once finished, click OK.

7) Moving on to the domain configuration, the first thing to do is specify a host name prefix (I’ve used the same prefix as in step 5). Give your domain a name and select the pricing tier for your domain controllers.

006 - AGConfig003

8) Configuring a virtual network is easy. Specify a name for your virtual network and an IP address range.

007 - AGConfig004

9) Storage accounts within Azure are containers for blob storage. All hard disks associated with Azure VMs are treated as blob storage, and therefore you must specify a storage account to hold them. The account must have a unique name. If you have already created a storage account then you may use it to store your VMs. Select an appropriate pricing model.

008 - AGConfig005

10) Once the storage is configured, the only thing left to do is choose the data centre where you want your VMs to be stored. I have chosen Northern Europe (Dublin). Click Create and you’re done!

009 - AGConfig006

The creation process takes about an hour to complete, but this can vary depending on the time of day. Once the creation is complete you will be able to see all of the VMs that make up the solution by selecting Browse – Virtual Machines from the Azure portal landing page.

Now you can start adding databases to your availability group! I hope you’ve found this tutorial useful.
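If you would rather script that last step than use the portal, the SQLPS module that ships with SQL Server has a cmdlet for it. A minimal, hedged sketch follows; the server, availability group and database names below are made up, and yours will match what you entered in steps 5 and 6:

Import-Module SQLPS -DisableNameChecking

# On the primary replica: add an existing database to the availability group...
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\MYPREFIX-0\DEFAULT\AvailabilityGroups\MyAg" -Database "MyDatabase"

# On each secondary replica: restore the database WITH NORECOVERY first, then join it to the group...
# Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\MYPREFIX-1\DEFAULT\AvailabilityGroups\MyAg" -Database "MyDatabase"

Note that the database needs a full backup taken, and to be restored on each secondary WITH NORECOVERY, before it can join the group.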

Azure is great for quickly creating a proof of concept or sandbox environment that you can play about in. More importantly, Azure has the capability to provide an enterprise-scale platform once your POC has been proven. I would encourage you to create all of your POCs in Azure from today. Too many times I’ve worked for companies where great ideas have been shelved and forgotten because the procurement process for physical kit, or a reliance on virtualisation teams, was hampering the exploration of ideas.

Cloud providers are dramatically changing the speed of delivery for IT. I for one am completely sold on “The Cloud” and the more I explore the Azure platform the more excited I am about my upcoming projects and how it can help!

SQL Server Data Files in Azure – Part 1 – Creating a database

One of the interesting new features of SQL Server 2014 is the ability to place SQL Server database files directly in Azure blob containers. The purpose of this post is to walk you through setting up a database with its data and log files in Azure. In a later post I plan to test the performance of a database configured this way.

Why would you want to put data files in Azure?

Ease of migration – You could almost consider Azure blob storage to be a NAS in the cloud with unlimited storage potential. You can host your data and log files in Azure while your databases remain attached to local SQL Server instances. Migrating a database between servers then becomes as simple as issuing a detach and an attach command and, hey presto, you’re done (see the sketch after this list).

Cost and storage benefits – Buying disks for on-premises servers is relatively cheap these days when compared to the cost of SQL Server licensing. The cost of storing blobs in Azure can be found here: https://azure.microsoft.com/en-gb/pricing/details/storage/, but your first 1TB is (at the time of writing) charged at as little as £0.0147 per GB per month (roughly £15 a month for a full terabyte), which is hard to beat. You can store up to 500TB in each storage account, but one restriction to be aware of is that the maximum size of a page blob (the type we need) is 1TB. This means you may need to create more data files in your filegroups if you have databases greater than 1TB in size.

Disaster recovery – Disaster recovery plans might be greatly simplified. Having an HA on-premises solution to handle minor, non-catastrophic outages is something I always advise. Having standby kit for the purposes of DR can be expensive, especially when you consider that the likelihood you’ll ever need it might be slim. If your DR plan is simply to stand up a new server quickly in an alternate location, you don’t want to have to worry about fetching off-site backups or copying files over to the new location (if you can get them!). Simply re-connect your server to the Azure blobs!

Snapshot backup – Whilst I always advise taking traditional SQL Server backups, it is possible to use Azure snapshots to provide near instantaneous backups. For more information on that feature see here: https://msdn.microsoft.com/en-us/library/mt169363.aspx
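To make the migration point concrete, here is a minimal, hedged sketch. The server names are placeholders, the file URLs reuse the container created later in this post, and it assumes the SQL Server credential described below already exists on both instances:

# Detach the database from the old instance; the files never move, they stay in the blob container...
$oldServer = "OLDSERVER\SQL2014"
$newServer = "NEWSERVER\SQL2014"
$url = "https://simontestazureblob.blob.core.windows.net/simontestazureblob"

Invoke-Sqlcmd -ServerInstance $oldServer -Query "EXEC sp_detach_db @dbname = N'TestAzureDataFiles';"

# Attach on the new instance by pointing CREATE DATABASE ... FOR ATTACH at the same URLs...
$attachQuery = @"
CREATE DATABASE TestAzureDataFiles
ON (FILENAME = '$url/TestAzureData.mdf'),
   (FILENAME = '$url/TestAzureData_log.ldf')
FOR ATTACH;
"@
Invoke-Sqlcmd -ServerInstance $newServer -Query $attachQuery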

How do we do it?

We’re going to be using PowerShell to do a lot of the configuration for us. I’ll be creating a new database, but it is also possible to migrate databases you already have (the detach/attach sketch above is a starting point for that).

I will be using the ARM (Azure Resource Manager) version of Azure and its associated PowerShell cmdlets. If you’re using ASM (Azure Service Management, aka Classic) then you may need to alter the script a little.
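For reference, the main ASM differences are that there are no resource groups and the storage key object exposes different property names. Something like the below should be close, though I am writing it from memory, so treat it as a hedged sketch rather than tested script (it reuses the variable names from the script further down):

# ASM (Classic) mode: no resource group parameter, and keys are exposed as Primary/Secondary...
Switch-AzureMode -Name AzureServiceManagement
New-AzureStorageAccount -StorageAccountName $storageAccountName -Location $location -Type Standard_LRS
$accountKeys = Get-AzureStorageAccountKey -StorageAccountName $storageAccountName
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $accountKeys.Primary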

So the steps we need to complete are:

  1. Create a resource group
  2. Create a storage account
  3. Create a container
  4. Create an access policy
  5. Create a shared access signature
  6. Create a SQL Server credential
  7. Finally, create our database

Use the PowerShell below to address steps 1 – 5. Comments in the code will help you to understand what is happening at each stage.

# Set these to your subscription name, storage account name, and a new container name
#
$SubscriptionName='SimonCoeo'
$resourceGroupName= 'simontestazureblob'
$StorageAccountName='simontestazureblob'
$location= 'North Europe'

# Set the Azure mode to ARM
#
Switch-AzureMode -Name AzureResourceManager
New-AzureResourceGroup -Name $resourceGroupName -Location $location
New-AzureStorageAccount -Type Standard_LRS -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName -Location $location

# A new container name, must be all lowercase.
#
$ContainerName='simontestazureblob'

# Sets up the Azure Account, Subscription, and Storage context
#
Select-AzureSubscription -SubscriptionName $subscriptionName
$accountKeys = Get-AzureStorageAccountKey -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $accountKeys.Key1

# Creates a new container
#
$container = New-AzureStorageContainer -Context $storageContext -Name $containerName
$cbc = $container.CloudBlobContainer

# Sets up a Stored Access Policy and a Shared Access Signature for the new container
#
$permissions = $cbc.GetPermissions();
$policyName = 'policy1'
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$policy.SharedAccessStartTime = $(Get-Date).ToUniversalTime().AddMinutes(-5)
$policy.SharedAccessExpiryTime = $(Get-Date).ToUniversalTime().AddYears(10)
$policy.Permissions = "Read,Write,List,Delete"
$permissions.SharedAccessPolicies.Add($policyName, $policy)
$cbc.SetPermissions($permissions);

# Gets the Shared Access Signature for the policy
#
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$sas = $cbc.GetSharedAccessSignature($policy, $policyName)
Write-Host 'Shared Access Signature= '$($sas.Substring(1))''

# Outputs the Transact SQL to create the Credential using the Shared Access Signature
#
Write-Host 'Credential T-SQL'
$TSQL = "CREATE CREDENTIAL [{0}] WITH IDENTITY='Shared Access Signature', SECRET='{1}'" -f $cbc.Uri,$sas.Substring(1)

Write-Host $TSQL

Once that has executed, take the T-SQL script that was generated as the output and run it on the instance of SQL Server where you want to create your database.

-- Output From PoSh
CREATE CREDENTIAL [https://simontestazureblob.blob.core.windows.net/simontestazureblob] 
WITH IDENTITY='Shared Access Signature', 
SECRET='Your SAS Key'

And for our final step, a relatively straightforward CREATE DATABASE command.

-- Create database with data and log files in our Windows Azure container.
CREATE DATABASE TestAzureDataFiles 
ON
( NAME = TestAzureDataFiles,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData.mdf' )
 LOG ON
( NAME = TestAzureDataFiles_log,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData_log.ldf')

AzureDataFiles1

Now that the database has been created, we can look to compare the performance of our Azure-hosted database against a locally stored one. Once I’ve got some test results to share, I’ll post Part 2.
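In the meantime, here is a crude, hedged sketch of the kind of timing test I have in mind (the table name, row count and payload size are all arbitrary); running the same batch against the Azure-hosted database and a local one gives a first, rough comparison:

# A simple insert workload to run against each database...
$testQuery = @"
SET NOCOUNT ON;
CREATE TABLE dbo.PerfTest (Id INT IDENTITY(1,1) PRIMARY KEY, Payload CHAR(1000) NOT NULL);
DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.PerfTest (Payload) VALUES (REPLICATE('x', 1000));
    SET @i += 1;
END
"@

# Time it against the Azure-hosted database (repeat with -Database pointed at a local database to compare)...
Measure-Command {
    Invoke-Sqlcmd -ServerInstance "SIMON-XPS13\SQL2014" -Database "TestAzureDataFiles" -Query $testQuery -QueryTimeout 600
}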