Restoring 1000s of SQL Server Databases Using PowerShell

Recently I have been working with a client who has a production SQL Server instance with ~2.5k databases on it. They are in the process of moving to a new SQL Server cluster and are reviewing their other non-production environments too.

One of their major pain points at the moment is their pre-production refresh process, which involves restoring all 2.5k databases to a pre-production SQL Server instance. The current process involves retrieving backups from Iron Mountain and performing file system restores. They are looking to streamline this process and automate it as much as possible. Most of the process could be achieved using T-SQL, but there are extra file system tasks that need to be performed which are more easily achieved using PowerShell. The Restore-SqlDatabase cmdlet in the SQLPS module that ships with recent versions of SQL Server made it worth evaluating whether I could encapsulate the entire thing in one script. It’s still a work in progress, but here are some snippets from it.

Firstly, I should probably explain that their backups are all stored in a single network share in this structure:

\\BackupRoot\Database\Full\backup.bak

The objective is to restore the most recent full backup of each production database to the pre-production server.

I’ve created some variables at the top of the script (real variable data removed):

# Folders beneath the backup root (one per database)
$backupRoot = Get-ChildItem -Path "C:\Temp\Databases"
# Destination folders for the restored data and log files
$datafilesDest = "C:\Temp\DataFiles"
$logfilesDest = "C:\Temp\LogFiles"
# Target SQL Server instance to restore to
$server = "SIMON-XPS13\SQL2014" 

A description of the individual commands within the main body is included in the code below.

## For each folder in the backup root directory...
#
foreach($folder in $backupRoot)
{   
    # Get the most recent .bak file in this database's folder...
    $backupFiles = Get-ChildItem -Path $folder.FullName -Filter "*.bak" -Recurse | Sort-Object -Property CreationTime -Descending | Select-Object -First 1
    

    # For each .bak file...
    foreach ($backupFile in $backupFiles)
    {
        # Restore the header to get the database name...
        $query = "RESTORE HEADERONLY FROM DISK = N'"+$backupFile.FullName+"'"
        $headerInfo = Invoke-Sqlcmd -ServerInstance $server -Query $query
        $databaseName = $headerInfo.DatabaseName

        # Restore the file list to get the logical filenames of the database files...
        $query = "RESTORE FILELISTONLY FROM DISK = N'"+$backupFile.FullName+"'"
        $files = Invoke-Sqlcmd -ServerInstance $server -Query $query

        # Differentiate data files from log files...
        $dataFile = $files | Where-Object -Property Type -EQ "D"
        $logFile = $files | Where-Object -Property Type -EQ "L"

        # Set some variables...
        $dataFileName = $dataFile.LogicalName
        $logFileName = $logFile.LogicalName

        # Set the destination of the restored files...
        $dataFileFullPath = $datafilesDest+"\"+$dataFileName+".mdf"
        $logFileFullPath = $logfilesDest+"\"+$logFileName+".ldf"

        # Create some "Relocate" file objects to pass to the Restore-SqlDatabase cmdlet...
        $RelocateData = New-Object 'Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' -ArgumentList $dataFileName, $dataFileFullPath
        $RelocateLog = New-Object 'Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' -ArgumentList $logFileName, $logFileFullPath

        # Perform the database restore... and then go around the loop.
        Restore-SqlDatabase -ServerInstance $server -Database $databaseName -BackupFile $backupFile.FullName -RelocateFile @($RelocateData,$RelocateLog) -ReplaceDatabase
    }
} 

With this script I was able to restore all 2.5k databases with relative ease. The great thing is that this can be run in just a few clicks and is much less arduous than the previous process. This script is unlikely to suit your exact use case but hopefully you can take from it what you need and tailor it to your environment.

If you have any questions, then please don’t hesitate to contact me.

Choosing the right tool for the job

I was working with a client recently when an architect expressed disdain for a particular part of the SQL Server stack. In this particular case the architect had chosen to roll their own solution, which they felt was better. My intention is not to comment on their particular solution, but it did set me thinking… do I let previous bad experiences cloud my judgment and adversely affect my solution proposals?

It might work, but is it the best way?

During my career I have worked with all manner of tools and products, and I have worked with consultants and technical experts to find solutions to complex problems. Now, in my role as a consultant, it is important to capture a business’s requirements and map those onto technical requirements. By then designing a solution to meet the technical requirements, it should follow that the business requirements are met. In the end, if I don’t meet the business requirements then the project sponsors will see no value in the work I have done, no matter how hard I’ve worked or how good my solution might be, and the project will be deemed a failure.

One size does not fit all

On a SQL Server related project, when trying to find solutions to technical requirements, I always consider all of the options.

Something in here will do the job!

In a previous role I worked with transactional replication. We had terrible trouble with our publication, but that hasn’t stopped me from recommending it to clients where it has been the best solution to their particular requirement. For a slightly different requirement, backup and restore might have been appropriate; Always On availability groups with a readable replica may have been viable; using SSIS to copy data from A to B would have been another option. But the best solution for that client, at that time, was transactional replication. So even though I’d previously had some trouble with it, I was still able to recognise that it was the best tool for the job.

When it comes to technical solutions, one size most definitely does not fit all. There isn’t one way to provide High Availability (HA) or Disaster Recovery (DR) for a SQL Server database. There isn’t one way to build a data warehouse. There isn’t one way to design SSAS cubes or OLAP models. Indeed, if there were, my services as a consultant wouldn’t be in such demand. I can’t emphasise enough how important it is to step back and look at your problems without bias or preconceptions. Only with a truly agnostic view can you design the best solution to a problem. Just because it didn’t work for you before doesn’t mean you shouldn’t consider it, and with regard to rolling your own solutions, just be sure you’re not re-inventing any wheels.

Creating an Availability Group with Microsoft Azure

Have you ever wanted to try out SQL Server Availability Groups but never had enough spare tin lying around to set up an environment? Well, the Azure portal has been updated, and now you don’t need any physical kit lying around, nor do you need to worry about having a beefed-up laptop running Hyper-V!

All you need is an MSDN subscription that gives you an amount of Azure credit. Luckily I have a premium subscription, which entitles me to £65 of Azure credit every month. First you’ll need to log in to your MSDN subscription and activate your Azure subscription, and then you’re ready to go!

At the end of August the SQL Server team announced the release of the SQL Server Always On Azure portal gallery item. Before this gallery item was available, if you wanted to configure an Always On Availability Group (AG) you needed to create all of the individual machines that comprise the infrastructure manually. This included setting up the virtual network, configuring the domain and then configuring SQL Server. The addition of the gallery item greatly simplifies the task of creating and automating an AG.

In this tutorial I will show you where to find this gallery item and how to use the Azure portal to configure a SQL Server 2014 Always On Availability Group server setup.

1) Using a web browser, navigate to http://portal.azure.com and log into your Azure subscription. Once you’re logged in you will be presented with a screen similar to the one below.

001 - Portal

2) Click ADD in the bottom left-hand corner, then click “Everything”.

002 - Gallery1

3) Click Data, storage, cache + backup and you should see an option for SQL Server 2014 AlwaysOn (you may also have seen this option available at other points in this process).

003 - Gallery2

4) Once you have selected the SQL Server 2014 AlwaysOn option, the screen below appears. Complete the fields with your chosen user names and passwords. A Resource Group is a named collection of related Azure resources that makes them simple to see and manage.

004 - AGConfig001

5) Once complete, select the SQL Server Settings – Configure Settings option. Specify a host name prefix, an availability group name, and a name for the availability group listener. Clicking PRICING TIER will allow you to select from three pre-defined pricing options for your SQL Servers; there is also an option to browse all pricing tiers. I selected the 4-core, 28GB variant.

005 - AGConfig002

6) Ensure that you give the SQL Server service account a descriptive name and an appropriately strong password. Once finished, click OK.

7) Moving on to the domain configuration, the first thing to do is specify a hostname prefix (I’ve used the same prefix as in step 5). Give your domain a name and select the pricing tier for your domain controllers.

006 - AGConfig003

8) Configuring a virtual network is easy. Specify a name for your virtual network and an IP address range.

007 - AGConfig004

9) Storage accounts within Azure are containers for blob storage. All hard disks associated with Azure VMs are stored as blobs, and therefore you must specify a storage account to hold them. The account must have a unique name. If you have already created a storage account then you may use it to store your VMs. Select an appropriate pricing model.

008 - AGConfig005

10) Once the storage is configured, the only thing left to do is choose the data centre where you want your VMs to be stored. I have chosen North Europe (Dublin). Click Create and you’re done!

009 - AGConfig006

The creation process takes about an hour to complete, but this can vary depending on the time of day. Once the creation is complete you will be able to see all of the VMs that were created to form the solution by selecting Browse – Virtual Machines from the Azure portal landing page.

Now you can start adding databases to your availability group! I hope you’ve found this tutorial useful.

Azure is great for quickly creating a proof of concept or a sandbox environment that you can play about in. More importantly, Azure has the capability to provide an enterprise-scale platform once your POC has been proven. I would encourage you to create all your POCs in Azure from today. Too many times I’ve worked for companies where great ideas have been shelved and forgotten because the procurement process for physical kit, or a reliance on virtualisation teams, was hampering the exploration of ideas.

Cloud providers are dramatically changing the speed of delivery for IT. I for one am completely sold on “The Cloud” and the more I explore the Azure platform the more excited I am about my upcoming projects and how it can help!

SQL Server Data Files in Azure – Part 1 – Creating a database

One of the interesting new features of SQL Server 2014 is the ability to place SQL Server database files directly in Azure blob containers. The purpose of this post is to walk you through setting up a database with its data and log files in Azure. In a later post I plan to test the performance of a database configured this way.

Why would you want to put data files in Azure?

Ease of migration – You could almost consider Azure blob storage to be a NAS in the cloud with unlimited storage potential. You can host your data and log files in Azure while your databases run on local SQL Server instances. Migrating a database between servers then becomes as simple as issuing a detach and an attach command and, hey presto, you’re done.
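To make that concrete, here is a minimal sketch of such a migration, assuming a database called MyDatabase whose files already live in a blob container, and assuming the destination instance holds a credential for that container (the database, storage account and container names below are placeholders):

-- On the source server: detach the database (its files stay in the Azure container)
EXEC sp_detach_db @dbname = N'MyDatabase';

-- On the destination server: attach the database by pointing at the same blob URLs
CREATE DATABASE MyDatabase
ON ( NAME = MyDatabase,
     FILENAME = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer/MyDatabase.mdf' )
LOG ON
( NAME = MyDatabase_log,
  FILENAME = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer/MyDatabase_log.ldf' )
FOR ATTACH;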

Cost and storage benefits – Buying disks for on-premises servers is relatively cheap these days when compared to the cost of SQL Server licensing. The cost of storing blobs in Azure can be found here: https://azure.microsoft.com/en-gb/pricing/details/storage/, but your first 1TB is (at the time of writing) charged at as little as £0.0147 per GB, which is hard to beat. You can store up to 500TB in each storage account, but one restriction to be aware of is that the maximum size of a page blob (the type we need) is 1TB. This means you may need to create more data files in your filegroups if you have databases greater than 1TB in size.
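If a database does approach that limit, one option is simply to add further data files, each backed by its own page blob. A rough sketch, again using placeholder names and assuming the credential for the container already exists:

-- Add another data file, stored as a separate page blob, so that no single
-- blob needs to grow beyond the 1TB page blob limit
ALTER DATABASE MyDatabase
ADD FILE
( NAME = MyDatabase_Data2,
  FILENAME = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer/MyDatabase_Data2.ndf' );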

Disaster recovery – Disaster recovery plans might be greatly simplified. Having an HA on-premises solution to handle minor, non-catastrophic outages is always something I advise. Having standby kit for the purposes of DR can be expensive, especially when you consider that the likelihood you’ll ever need it might be slim. If your DR plan is to simply and quickly stand up a new server in an alternate location, you don’t want to have to worry about fetching off-site backups or copying files over to the new location (if you can get them!). Simply re-connect your server to the Azure blobs!

Snapshot backup – Whilst I always advise taking traditional SQL Server backups, it is possible to use Azure snapshots to provide near instantaneous backups. For more information on that feature see here: https://msdn.microsoft.com/en-us/library/mt169363.aspx
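As a rough illustration of what that looks like (note that the file-snapshot syntax below requires SQL Server 2016 or later, the database name and URL are placeholders, and a credential for the backup container is also needed):

-- Take a near-instantaneous backup by snapshotting the database files in Azure storage
BACKUP DATABASE MyAzureDatabase
TO URL = 'https://yourstorageaccount.blob.core.windows.net/backups/MyAzureDatabase.bak'
WITH FILE_SNAPSHOT;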

How do we do it?

We’re going to be using PowerShell to do a lot of the configuration for us. I’ll be creating a new database but it is also possible to migrate databases you already have (I’ll include some script to help with that too).

I will be using the ARM (Azure Resource Manager) version of Azure and its associated PowerShell. If you’re using ASM (Azure Service Management, i.e. the Classic model) then you may need to alter the script a little.

So the steps we need to complete are:

  1. Create a resource group
  2. Create a storage account
  3. Create a container
  4. Create an access policy
  5. Create a shared access signature
  6. Create a SQL Server credential
  7. Finally, create our database

Use the PowerShell below to address steps 1 – 5. Comments in the code will help you to understand what is happening at each stage.

# Set these to your subscription name, resource group name, storage account name and location
#
$SubscriptionName='SimonCoeo'
$resourceGroupName= 'simontestazureblob'
$StorageAccountName='simontestazureblob'
$location= 'North Europe'

# Set the Azure mode to ARM
#
Switch-AzureMode -Name AzureResourceManager
New-AzureResourceGroup -Name $resourceGroupName -Location $location
New-AzureStorageAccount -Type Standard_LRS -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName -Location $location

# A new container name, must be all lowercase.
#
$ContainerName='simontestazureblob'

# Sets up the Azure Account, Subscription, and Storage context
#
Select-AzureSubscription -SubscriptionName $subscriptionName
$accountKeys = Get-AzureStorageAccountKey -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $accountKeys.Key1

# Creates a new container
#
$container = New-AzureStorageContainer -Context $storageContext -Name $containerName
$cbc = $container.CloudBlobContainer

# Sets up a Stored Access Policy and a Shared Access Signature for the new container
#
$permissions = $cbc.GetPermissions();
$policyName = 'policy1'
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$policy.SharedAccessStartTime = $(Get-Date).ToUniversalTime().AddMinutes(-5)
$policy.SharedAccessExpiryTime = $(Get-Date).ToUniversalTime().AddYears(10)
$policy.Permissions = "Read,Write,List,Delete"
$permissions.SharedAccessPolicies.Add($policyName, $policy)
$cbc.SetPermissions($permissions);

# Gets the Shared Access Signature for the policy
#
$policy = new-object 'Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy'
$sas = $cbc.GetSharedAccessSignature($policy, $policyName)
Write-Host 'Shared Access Signature= '$($sas.Substring(1))''

# Outputs the Transact SQL to create the Credential using the Shared Access Signature
#
Write-Host 'Credential T-SQL'
$TSQL = "CREATE CREDENTIAL [{0}] WITH IDENTITY='Shared Access Signature', SECRET='{1}'" -f $cbc.Uri,$sas.Substring(1)

Write-Host $TSQL

Once that has executed, take the T-SQL script that was generated as output and run it on the instance of SQL Server where you want to create your database.

-- Output From PoSh
CREATE CREDENTIAL [https://simontestazureblob.blob.core.windows.net/simontestazureblob] 
WITH IDENTITY='Shared Access Signature', 
SECRET='Your SAS Key'

And for our final step, a relatively straightforward CREATE DATABASE command.

-- Create database with data and log files in our Windows Azure container.
CREATE DATABASE TestAzureDataFiles 
ON
( NAME = TestAzureDataFiles,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData.mdf' )
 LOG ON
( NAME = TestAzureDataFiles_log,
    FILENAME = 'https://simontestazureblob.blob.core.windows.net/simontestazureblob/TestAzureData_log.ldf')

AzureDataFiles1

Now that that has been created, we can look to compare the performance of our Azure-hosted database against a locally stored one. Once I’ve got some test results to share I’ll post Part 2.

The curious case of the slowly starting cluster

I came across an interesting case today that I thought was worth a quick blog post. I was working with a customer who was migrating onto their newly built SQL Server 2014 clustered instance. Part of the pre-migration testing was to fail the SQL Server service over between the two nodes to ensure it started correctly. What we found was that the SQL Server service wouldn’t accept incoming client connections, or appear “online” in Failover Cluster Manager, for around 75 seconds. Given the number of databases, virtual log files and resources, I would have expected failover to happen much more quickly.

Reviewing the SQL Server error log, we found many scripts being executed “upgrading databases” before the message “SQL Server is now ready for client connections. This is an informational message; no user action is required.” appeared. Upon investigation we found that one of the nodes was running SQL Server 2014 Service Pack 1 while the other was running SQL Server 2014 RTM.

In order to identify this we ran the following query on the current active node:

SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel') AS ProductLevel,
    SERVERPROPERTY('Edition') AS Edition
GO

We then failed the service over onto the other node and ran the query again. The product level on the two nodes was different, one reporting RTM and the other SP1. Once the RTM node was upgraded to SP1 we were able to fail the SQL Server service over between the two nodes in under 15 seconds.

Always make sure that nodes in your SQL Server clusters are running at the same patch level!

Measuring Query Performance

This is a quick post about measuring query performance. How do you know that the changes you’ve made to improve a query are actually beneficial? If all you’re doing is sitting there with a stopwatch, or watching the timer in the bottom-right corner of SQL Server Management Studio, then you’re doing it wrong!

Here’s a practical demonstration of how I go about measuring performance improvements. First let’s set up some demo data.

/* Create a Test database */
IF NOT EXISTS (SELECT * FROM sys.databases d WHERE d.name = 'Test')
CREATE DATABASE Test
GO

/* Switch to use the Test database */
USE test
GO

/* Create a table of first names */
DECLARE @FirstNames TABLE
(
 FirstName varchar(50)
)
INSERT INTO @FirstNames
SELECT 'Simon'
UNION ALL SELECT 'Dave'
UNION ALL SELECT 'Matt'
UNION ALL SELECT 'John'
UNION ALL SELECT 'James'
UNION ALL SELECT 'Alex'
UNION ALL SELECT 'Mark'

/* Create a table of last names */
DECLARE @LastNames TABLE
(
 LastName varchar(50)
)
INSERT INTO @LastNames
SELECT 'Smith'
UNION ALL SELECT 'Jones'
UNION ALL SELECT 'Davis'
UNION ALL SELECT 'Davies'
UNION ALL SELECT 'Roberts'
UNION ALL SELECT 'Bloggs'
UNION ALL SELECT 'Smyth'

/* Create a table to hold our test data */
IF OBJECT_ID('Test.dbo.Person') IS NOT NULL
DROP TABLE dbo.Person

/* Create a table of 5,764,801 people */
SELECT
 fn.FirstName,
 ln.LastName
INTO dbo.Person
FROM
 @FirstNames fn
 CROSS JOIN @LastNames ln
 CROSS JOIN @FirstNames fn2
 CROSS JOIN @LastNames ln2
 CROSS JOIN @FirstNames fn3
 CROSS JOIN @LastNames ln3
 CROSS JOIN @FirstNames fn4
 CROSS JOIN @LastNames ln4
GO

The features I use to track query performance are some advanced execution settings. These can be switched on using the GUI through Tools > Options > Query Execution > SQL Server > Advanced. The two we’re interested in are: SET STATISTICS TIME and SET STATISTICS IO.

These can be switched on using the following code:

/* Turn on the features that allow us to measure performance */
SET STATISTICS TIME ON
SET STATISTICS IO ON

Then in order to measure any improvement we make later it is important to benchmark the performance now.

/* Turn on the features that allow us to measure performance */
SET STATISTICS TIME ON
SET STATISTICS IO ON

/* Benchmark the current performance */
SELECT * FROM dbo.Person p WHERE p.FirstName = 'Simon'
GO

By checking the Messages tab we can see how long the query ran for (elapsed time), how much CPU work was done (CPU time) and how many reads were performed.

(823543 row(s) affected)
Table 'Person'. Scan count 5, logical reads 17729, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
 CPU time = 1623 ms, elapsed time = 4886 ms.

So… now to make a change that should yield an improvement.

/* Create an index to improve performance */
CREATE INDEX IX_Name ON dbo.Person(FirstName) INCLUDE (LastName)
GO

Now re-measure the performance…

/* Turn on the features that allow us to measure performance */
SET STATISTICS TIME ON
SET STATISTICS IO ON

/* Benchmark the current performance */
SELECT * FROM dbo.Person p WHERE p.FirstName = 'Simon'
GO
(823543 row(s) affected)
 Table 'Person'. Scan count 1, logical reads 3148, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
 CPU time = 249 ms, elapsed time = 4488 ms.

Looking at the two sets of measurements we can see that we’ve saved 398ms of elapsed time and 1,374ms of CPU time, and we’re doing 14,581 fewer reads. This is a definite improvement.

The critical thing about this approach is that we’re able to measure the improvement scientifically and therefore relay it to anyone who may be interested, whether that’s a line manager or an end customer.

Happy performance measuring!

Finding Poorly Performing Stored Procedures

Hi,

Following on from my script that looks at poorly performing queries, here’s one that helps to find procedures that may be performing poorly. It makes use of sys.dm_exec_procedure_stats and produces a result set that shows the following:

  • [Object]
  • [SQLText]
  • [QueryPlan]
  • [execution_count]
  • [Avg. CPU Time(s)]
  • [Avg. Physical Reads]
  • [Avg. Logical Reads]
  • [Avg. Logical Writes]
  • [Avg. Elapsed Time(s)]
  • [Total CPU Time(s)]
  • [Last CPU Time(s)]
  • [Min CPU Time(s)]
  • [Max CPU Time(s)]
  • [Total Physical Reads]
  • [Last Physical Reads]
  • [Min Physical Reads]
  • [Max Physical Reads]
  • [Total Logical Reads]
  • [Last Logical Reads]
  • [Min Logical Reads]
  • [Max Logical Reads]
  • [Total Logical Writes]
  • [Last Logical Writes]
  • [Min Logical Writes]
  • [Max Logical Writes]
  • [Total Elapsed Time(s)]
  • [Last Elapsed Time(s)]
  • [Min Elapsed Time(s)]
  • [Max Elapsed Time(s)]
  • cached_time
  • last_execution_time

I hope you find it useful.


SET NOCOUNT ON;

/*****************************************************
 Setup a temporary table to fetch the data we need from
 sys.dm_exec_procedure_stats.
 ******************************************************/
 IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
 CREATE TABLE #temp
 (
 database_id INT,
 object_id INT,
 [SQLText] VARCHAR(MAX),
 [QueryPlan] XML,
 [execution_count] bigint,
 [Avg. CPU Time(s)] DECIMAL(28,2),
 [Avg. Physical Reads] bigint,
 [Avg. Logical Reads] bigint,
 [Avg. Logical Writes] bigint,
 [Avg. Elapsed Time(s)] DECIMAL(28,2),
 [Total CPU Time(s)] DECIMAL(28,2),
 [Last CPU Time(s)] DECIMAL(28,2),
 [Min CPU Time(s)] DECIMAL(28,2),
 [Max CPU Time(s)] DECIMAL(28,2),
 [Total Physical Reads] bigint,
 [Last Physical Reads] bigint,
 [Min Physical Reads] bigint,
 [Max Physical Reads] bigint,
 [Total Logical Reads] bigint,
 [Last Logical Reads] bigint,
 [Min Logical Reads] bigint,
 [Max Logical Reads] bigint,
 [Total Logical Writes] bigint,
 [Last Logical Writes] bigint,
 [Min Logical Writes] bigint,
 [Max Logical Writes] bigint,
 [Total Elapsed Time(s)] DECIMAL(28,2),
 [Last Elapsed Time(s)] DECIMAL(28,2),
 [Min Elapsed Time(s)] DECIMAL(28,2),
 [Max Elapsed Time(s)] DECIMAL(28,2),
 cached_time DATETIME,
 last_execution_time DATETIME,
 [object] VARCHAR(MAX) NULL
 )

/*****************************************************
 Fetch the data we need from
 sys.dm_exec_procedure_stats.
 ******************************************************/
 insert into #temp
 select
 database_id,
 object_id,
 st.text,
 ep.query_plan,
 execution_count,

-- Averages
 CAST(total_worker_time * 1.0 / execution_count / 1000000.0 as DECIMAL(28,2)) AS [Avg. CPU Time(s)],
 total_physical_reads / execution_count AS [Avg. Physical Reads],
 total_logical_reads / execution_count AS [Avg. Logical Reads],
 total_logical_writes / execution_count AS [Avg. Logical Writes],
 CAST(total_elapsed_time * 1.0 / execution_count / 1000000.0 as DECIMAL(28,2)) AS [Avg. Elapsed Time(s)],

-- CPU Details
 CAST(total_worker_time / 1000000.0 as DECIMAL(28,2)) AS [Total CPU Time(s)],
 CAST(last_worker_time / 1000000.0 as DECIMAL(28,2)) AS [Last CPU Time(s)],
 CAST(min_worker_time / 1000000.0 as DECIMAL(28,2)) AS [Min CPU Time(s)],
 CAST(max_worker_time / 1000000.0 as DECIMAL(28,2)) AS [Max CPU Time(s)],

-- Physical Reads
 total_physical_reads AS [Total Physical Reads],
 last_physical_reads AS [Last Physical Reads],
 min_physical_reads AS [Min Physical Reads],
 max_physical_reads AS [Max Physical Reads],

-- Logical Reads
 total_logical_reads AS [Total Logical Reads],
 last_logical_reads AS [Last Logical Reads],
 min_logical_reads AS [Min Logical Reads],
 max_logical_reads AS [Max Logical Reads],

-- Logical Writes
 total_logical_writes AS [Total Logical Writes],
 last_logical_writes AS [Last Logical Writes],
 min_logical_writes AS [Min Logical Writes],
 max_logical_writes AS [Max Logical Writes],

-- Elapsed Time
 CAST(total_elapsed_time / 1000000.0 as DECIMAL(28,2)) AS [Total Elapsed Time(s)],
 CAST(last_elapsed_time / 1000000.0 as DECIMAL(28,2)) AS [Last Elapsed Time(s)],
 CAST(min_elapsed_time / 1000000.0 as DECIMAL(28,2)) AS [Min Elapsed Time(s)],
 CAST(max_elapsed_time / 1000000.0 as DECIMAL(28,2)) AS [Max Elapsed Time(s)],

cached_time,
 last_execution_time,
 null
 from
 sys.dm_exec_procedure_stats
 CROSS APPLY sys.dm_exec_sql_text(sql_handle) st
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) ep
 where
 database_id <> 32767

/*****************************************************
 This section of code is all about getting the object
 name from the dbid and the object id.
 ******************************************************/

-- Declare a Cursor
 DECLARE FetchObjectName CURSOR FOR
 SELECT
 object_id, database_id
 FROM
 #temp

-- Open the cursor
 OPEN FetchObjectName

-- Declare some vars to hold the data to pass into the cursor
 DECLARE @var1 INT,
 @var2 INT
 DECLARE @sql VARCHAR(MAX)
 DECLARE @object VARCHAR(MAX)

-- Create a temporary table to hold the result of the dynamic SQL
 IF OBJECT_ID('tempdb..#object') IS NOT NULL DROP TABLE #object
 CREATE TABLE #object
 (
 objectname VARCHAR(MAX)
 )

-- Loop through the records from above and fetch the object names
 FETCH NEXT FROM FetchObjectName INTO @var1, @var2
 WHILE ( @@FETCH_STATUS <> -1 )
 BEGIN
 IF ( @@FETCH_STATUS <> -2 )

-- Set the SQL we need to execute
 SET @sql = 'USE [' + DB_NAME(@var2) + '];
 SELECT OBJECT_SCHEMA_NAME(' + CONVERT(VARCHAR(MAX),@var1) + ',' + CONVERT(VARCHAR(MAX),@var2) + ') + ''.'' + ' + 'OBJECT_NAME(' + CONVERT(VARCHAR(MAX),@var1) + ');'

-- Make sure the table is empty!
 TRUNCATE TABLE #object

-- Fetch the name of the object
 INSERT INTO #object
 EXEC(@sql)

-- Set the object name to the local var.
 SELECT @object = objectname FROM #object

-- Update the original results
 UPDATE #temp
 SET
 [Object] = RTRIM(LTRIM(@object))
 WHERE
 object_id = @var1
 and database_id = @var2

-- Go around the loop....
 FETCH NEXT FROM FetchObjectName INTO @var1, @var2
 END
 CLOSE FetchObjectName
 DEALLOCATE FetchObjectName
 /*****************************************************
 Output the final results...
 ******************************************************/
 SELECT TOP 50
 db_name(database_id) + '.' + [Object] AS [Object],
 [SQLText],
 [QueryPlan],
 [execution_count],
 [Avg. CPU Time(s)],
 [Avg. Physical Reads],
 [Avg. Logical Reads],
 [Avg. Logical Writes],
 [Avg. Elapsed Time(s)],
 [Total CPU Time(s)],
 [Last CPU Time(s)],
 [Min CPU Time(s)],
 [Max CPU Time(s)],
 [Total Physical Reads],
 [Last Physical Reads],
 [Min Physical Reads],
 [Max Physical Reads],
 [Total Logical Reads],
 [Last Logical Reads],
 [Min Logical Reads],
 [Max Logical Reads],
 [Total Logical Writes],
 [Last Logical Writes],
 [Min Logical Writes],
 [Max Logical Writes],
 [Total Elapsed Time(s)],
 [Last Elapsed Time(s)],
 [Min Elapsed Time(s)],
 [Max Elapsed Time(s)],
 cached_time,
 last_execution_time
 FROM
 #temp
 ORDER BY
 [Avg. Logical Reads] DESC