SQL Centre of Excellence

There are lots of good blogs on how to optimise the performance of a tabular model by looking at where space is going. The alternative approach we will discuss here is to look at the trace file to determine where time was spent during processing. This approach is very valuable for MOLAP models, so can we do the same for tabular models?

http://blogs.prodata.ie/post/Tool-for-baselining-MOLAP-Processing.aspx

The diagram below shows the events that can be viewed in SQL Profiler when you examine the “Progress Report End” event during model processing.

[Image: “Progress Report End” events shown in SQL Profiler]

By importing the trace file into a SQL table and then building a PowerPivot Excel workbook on top, we can see where time has been spent.
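On SQL Server the import step is typically a one-liner with `fn_trace_gettable`. As a self-contained sketch of the same idea, the snippet below loads a few trace rows into a local SQLite table standing in for the SQL Server one; the column names and sample durations are assumptions, not the actual trace schema.

```python
import csv
import io
import sqlite3

# Hypothetical CSV export of the profiler trace; real .trc column names
# depend on which events and columns were captured.
trace_csv = io.StringIO(
    "EventClass,EventSubclass,ObjectName,Duration\n"
    "Progress Report End,ReadData,FactInternetSales,540000\n"
    "Progress Report End,ReadData,FactResellerSales,510000\n"
)

conn = sqlite3.connect(":memory:")  # stand-in for a SQL Server database
conn.execute(
    "CREATE TABLE TraceData "
    "(EventClass TEXT, EventSubclass TEXT, ObjectName TEXT, Duration INTEGER)"
)

# Load each trace row into the table, as the PowerPivot workbook expects.
rows = [
    (r["EventClass"], r["EventSubclass"], r["ObjectName"], int(r["Duration"]))
    for r in csv.DictReader(trace_csv)
]
conn.executemany("INSERT INTO TraceData VALUES (?, ?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM TraceData").fetchone()[0]
print(count)  # 2
```

The PowerPivot workbook then connects to this table and does its roll-ups on top.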

On my scaled-up Adventure Works example, which takes about 20 minutes to process, I can see that time is split quite evenly between the two main fact tables, which are about 100 million rows apiece.

[Image: processing time broken down by table]

I can also see which columns took the most time to compress, and break this down by table.

[Image: compression time broken down by column and table]
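The roll-up behind these views is just a GROUP BY over the trace rows. A minimal sketch, again using an in-memory SQLite table with assumed column names and made-up durations in place of the real imported trace:

```python
import sqlite3

# In-memory stand-in for the imported trace table (schema is an assumption).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TraceData (EventSubclass TEXT, ObjectName TEXT, Duration INTEGER)"
)
conn.executemany(
    "INSERT INTO TraceData VALUES (?, ?, ?)",
    [
        ("ReadData", "FactInternetSales", 300000),
        ("VertiPaq", "FactInternetSales", 240000),
        ("ReadData", "FactResellerSales", 280000),
    ],
)

# Where did processing time go?  Sum duration per table, largest first --
# the same roll-up the PowerPivot workbook performs over the trace rows.
by_table = conn.execute(
    """
    SELECT ObjectName, SUM(Duration) AS TotalDuration
    FROM TraceData
    GROUP BY ObjectName
    ORDER BY TotalDuration DESC
    """
).fetchall()
print(by_table)  # [('FactInternetSales', 540000), ('FactResellerSales', 280000)]
```

Grouping by `EventSubclass` instead of (or as well as) `ObjectName` gives the per-column and per-event breakdowns shown in the screenshots.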


Limitations with Tabular Trace Files

Unfortunately we don’t get as much value out of the tabular trace file as we do from the MOLAP trace file. There are a number of issues/bugs:

1. The ExecuteSQL complete message is recorded in the trace file as soon as the first row is returned, which makes it of little use in estimating how long a query took to run. You would need to use a database engine profiler to examine the efficiency of the underlying queries. I hope the product team can fix this, as it would be really useful to be able to determine query duration from the trace file alone.

2. A lot of the events are “blocking”, so the total duration is pretty much the same for the different types of event (ReadData, VertiPaq, Process). As a result we can see where time was spent at a high level, but not what it was spent on.


A Better Approach

While the trace file approach is “ok” for baselining processing time, you are probably better off using Kasper de Jonge’s memory usage tool to determine which tables and columns need optimising – this will naturally reduce processing time.

http://www.powerpivotblog.nl/what-is-using-all-that-memory-on-my-analysis-server-instance/

[Image: output of the memory usage tool]


Getting Started

If you want to play around with the PowerPivot model that reads in the trace file, my sample Excel workbook is below:

https://skydrive.live.com/redir?page=view&resid=A24CC88336E6B2AB!16777&authkey=!AG8Kzei2taPJtCw

Instructions:

1. Capture a trace file with at least the “Progress Report End” event while processing a tabular model.

2. Import the trace file into a SQL Server table.

3. Update the PowerPivot connection to point to the data table

4. Refresh the PowerPivot Model.
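After step 2 it is worth a quick sanity check that the imported table actually contains the “Progress Report End” rows the workbook depends on. A sketch of that check, with SQLite standing in for SQL Server and all names assumed:

```python
import sqlite3

# Stand-in for the table produced in step 2 (schema and values assumed).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TraceData (EventClass TEXT, EventSubclass TEXT, Duration INTEGER)"
)
conn.executemany(
    "INSERT INTO TraceData VALUES (?, ?, ?)",
    [
        ("Progress Report End", "Process", 900000),
        ("Progress Report Begin", "Process", 0),
    ],
)

# The workbook's measures only make sense over "Progress Report End" rows,
# so confirm at least some were captured before refreshing the model.
ends = conn.execute(
    "SELECT COUNT(*) FROM TraceData WHERE EventClass = 'Progress Report End'"
).fetchone()[0]
print(ends)  # 1
```

If this count is zero, re-check which events were selected when the trace was captured in step 1.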

