SSAS Cube Processing Performance Troubleshooting – A platform perspective

Analysing Index Usage

The first step I take when troubleshooting cube processing performance issues is to analyse index usage on the underlying relational tables.

SELECT DB_NAME(s.database_id) AS [Database],
 OBJECT_NAME(s.[object_id]) AS [Table],
 i.name AS [Index],
 s.user_seeks,
 s.user_scans,
 s.user_lookups,
 s.user_updates
FROM sys.dm_db_index_usage_stats AS s
 JOIN sys.indexes AS i
 ON i.object_id = s.object_id
 AND i.index_id = s.index_id
WHERE
 OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()

Look at any recently added indexes. Are they being used? Via seeks or scans? Is there an index with a large number of user lookups?

Where an index is not being used, consider whether it is required. Having too many indexes on a table can have an adverse effect on performance: when rows are inserted, deleted or updated, each index on the table must be maintained, and that maintenance has an overhead. The query above has an output column called user_updates, which is the number of times an index has been modified as a result of data changes within the table.
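If an index shows zero seeks, scans and lookups but a high user_updates count, it is paying maintenance cost without serving any reads and is a candidate for removal. A minimal sketch, assuming a hypothetical index and table name:

```sql
-- Hypothetical example: drop an index that the usage stats show
-- is never read but is frequently maintained. Script out the
-- index definition first so it can be recreated if needed.
DROP INDEX [IX_FactSales_UnusedExample] ON [dbo].[FactSales];
```

Bear in mind that sys.dm_db_index_usage_stats is reset when the instance restarts, so confirm the stats cover a representative workload period before dropping anything.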

Find Missing Indexes

Use the following query to identify missing indexes.

SELECT TOP 25
 DB_NAME(dm_mid.database_id) AS DatabaseID,
 dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact,
 OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName],
 'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_'
 + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','') +
 CASE
 WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN '_'
 ELSE ''
 END
 + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','')
 + ']'
 + ' ON ' + dm_mid.statement
 + ' (' + ISNULL (dm_mid.equality_columns,'')
 + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
 + ISNULL (dm_mid.inequality_columns, '')
 + ')'
 + ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement
FROM
 sys.dm_db_missing_index_groups dm_mig
 INNER JOIN sys.dm_db_missing_index_group_stats dm_migs ON dm_migs.group_handle = dm_mig.index_group_handle
 INNER JOIN sys.dm_db_missing_index_details dm_mid ON dm_mig.index_handle = dm_mid.index_handle
WHERE
 dm_mid.database_id = DB_ID()
ORDER BY
 Avg_Estimated_Impact DESC

Be aware that the DMVs will often highlight several candidate indexes for the same table. Be careful not to add indexes that duplicate, or are subsets of, existing indexes. A CREATE INDEX statement is provided in the output.
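As an illustration only, a generated statement from the query above might look like the following (the table and column names here are hypothetical). Review it against the table's existing indexes before running it, since the suggestion may be a subset of an index you already have:

```sql
-- Hypothetical Create_Statement produced by the missing-index
-- query above: equality columns form the key, and the columns
-- the optimiser wanted returned go into the INCLUDE list.
-- Check for overlap with existing indexes before creating it.
CREATE INDEX [IX_FactSales_DateKey_CustomerKey]
ON [dbo].[FactSales] ([DateKey], [CustomerKey])
INCLUDE ([SalesAmount]);
```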

Finding Query Plans in the Cache

The following query will help you identify long running queries in the plan cache:

SELECT TOP 20
 DB_NAME(qt.dbid) AS [Database],
 [Query] = SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
 ((CASE WHEN qs.statement_end_offset = -1
 THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
 ELSE qs.statement_end_offset
 END - qs.statement_start_offset)/2) + 1),
 [TotalRunTime (s)] = CAST(qs.total_elapsed_time/ 1000000.0 AS DECIMAL(28,2)),
 [TotalTimeWaiting (s)] = CAST((qs.total_elapsed_time - qs.total_worker_time) / 1000000.0 AS DECIMAL(28,2)),
 [%TimeRunning] = CAST(qs.total_worker_time * 100.0 / qs.total_elapsed_time AS DECIMAL(28,2)),
 [%TimeWaiting] = CAST((qs.total_elapsed_time - qs.total_worker_time)* 100.0 / qs.total_elapsed_time AS DECIMAL(28, 2)),
 [ExecutionCount] = qs.execution_count,
 [AverageRunTime] = CAST(qs.total_elapsed_time / 1000000.0 / qs.execution_count AS DECIMAL(28, 2)),
 [AverageTimeWaiting (s)] = CAST((qs.total_elapsed_time - qs.total_worker_time) / 1000000.0 / qs.execution_count AS DECIMAL(28, 2)),
 [QueryPlan] = qp.query_plan
FROM
 sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
 CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE
 qs.total_elapsed_time > 0
 AND DB_NAME(qt.dbid) IS NOT NULL
 AND qt.text LIKE '%<INSERT SEARCH STRING HERE>%'
ORDER BY
 [AverageRunTime] DESC

This query, currently ordered by average run time, lets you identify queries that take a long time to run. Those with the highest average run time are likely to be the SSAS cube partition processing queries. The QueryPlan column lets you open the execution plan associated with each query; look for expensive scans and key lookups.

Alter the WHERE clause of the query above if you are looking for statements that use specific views or tables.
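For example, to find the processing statements behind a particular partition, you can filter the plan cache on the view the partition is bound to. A cut-down sketch, assuming a hypothetical view name:

```sql
-- Hypothetical example: search the plan cache for statements
-- that reference a specific partition view (the view name
-- vwFactSalesCurrentYear is made up for illustration).
SELECT TOP 20
 qs.execution_count,
 qs.total_elapsed_time / 1000000.0 AS [TotalRunTime (s)],
 qt.text AS [Query]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.text LIKE '%vwFactSalesCurrentYear%'
ORDER BY qs.total_elapsed_time DESC;
```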

Add indexes to support long running queries.

Eliminating Key Lookups

[Screenshot: Key Lookup operator in a query plan, showing its Output List columns]

Eliminate key lookups by altering existing indexes or adding new covering indexes.

To identify the columns that need adding to an existing index, look at the Output List of the Key Lookup plan operator; a screenshot of this is shown above. A key lookup is an expensive operation that must be performed for every row in the result set. By adding these columns to the existing index as included columns, SQL Server can read the data it needs straight from the index pages.
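For instance, if the plan shows a Key Lookup whose Output List contains two columns not present in the nonclustered index being seeked, adding them as included columns makes the index covering and removes the lookup. A sketch with hypothetical names:

```sql
-- Hypothetical covering index: the key column serves the seek,
-- while the INCLUDE columns satisfy the Output List of the
-- former Key Lookup, so the base table is no longer touched.
CREATE NONCLUSTERED INDEX [IX_FactSales_DateKey_Covering]
ON [dbo].[FactSales] ([DateKey])
INCLUDE ([SalesAmount], [Quantity]);
```

Included columns are stored only at the leaf level of the index, so they widen the index pages but do not affect key ordering or key-size limits.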
