SQL Centre of Excellence

With a “ragged” hierarchy we often need a way to make sure that the hierarchy is displayed in a user friendly fashion when measures can be linked to any level (including parents).

There are two established solutions:

1) The Pure DAX approach

Alberto was one of the first to publish a DAX workaround for the problem of ragged hierarchies here:


This workaround starts to get very complex when you have multiple hierarchies, and performance can suffer on large hierarchies because the evaluation happens at run time, but it does work and is “supported”.
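To make the idea concrete: the DAX workaround flattens the parent-child hierarchy into fixed level columns, padding ragged branches by repeating the leaf member downwards (typically with `PATH`, `PATHITEM` and `LOOKUPVALUE`). This is not Alberto's DAX itself, just a sketch of the same flattening logic in Python; the account table and column names are invented for illustration.

```python
def flatten(rows, depth):
    """rows: dict of id -> (name, parent_id).
    Returns id -> list of `depth` level names, padding ragged branches
    by repeating the leaf name (the repeats are what HideMemberIf would hide)."""
    def path(node_id):
        # Walk up the parent-child chain, then reverse: root first.
        chain = []
        while node_id is not None:
            name, parent = rows[node_id]
            chain.append(name)
            node_id = parent
        return list(reversed(chain))

    out = {}
    for node_id in rows:
        p = path(node_id)
        out[node_id] = p + [p[-1]] * (depth - len(p))
    return out

# A tiny ragged hierarchy (ids and names are made up):
accounts = {
    1: ("All Accounts", None),
    2: ("Assets", 1),
    3: ("Cash", 2),
}
print(flatten(accounts, 4)[3])  # ['All Accounts', 'Assets', 'Cash', 'Cash']
```

In the DAX version each of those level lists becomes a calculated column per level, which is why the approach balloons once you have several hierarchies.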

2) HideMemberIf

BIDS Helper exposes an “unsupported” technique to add HideMemberIf into the XMLA of the model definition. https://bidshelper.codeplex.com/

This solution looks more appealing as a lot of the work is done during ProcessRecalc when it does hierarchy processing – hopefully giving a benefit at query time. It also involves fewer calculated measures, columns and code, which sounds nice.

However, two words of warning:

a) There is no guarantee that this will work in later versions.

b) There are stability issues with a large number of levels. On the model we are working on, ProcessRecalc and Visual Studio will just hang when we try to set this property on a nine-level hierarchy.


In the end we did use HideMemberIf, but only set the property on the specific levels which we know can have measures attached to them, to avoid the stability issues of having it turned on for all levels.

I’ve been diving deep into how long Analysis Services can take to open a connection for a few months now, and thanks to some tips from many people (Alberto Ferrari, Greg Galloway and Darren Gosbell, to name a few) here are some ideas to improve performance.

1. Make sure your network is not slow

OK, sounds kinda obvious, but this was killing me for months.

Most of my problems ended up being driver related, with various incompatibilities between our NICs, Hyper-V and some of the newer features.

A simple internet speed test or benchmarking tool would have shown that the VMs running SSAS were running very slowly with packet loss. I blogged more details here


2. Avoid Named Instances

It can take up to double the time to open a connection to a named instance. As a test I opened 1,000 connections to a remote server with five scenarios:

  • By Name (Default Instance)
  • By IP (Default Instance)
  • By Server Name and Instance Name (Named Instance)
  • By Name and Port number (Named Instance)
  • By IP and Port number (Named Instance)
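The shape of the test harness can be sketched as below. This is illustrative only: the real test opened ADOMD connections, which is environment specific, so the connection-opening call is passed in as a callable here, and the server name, IP address and connection strings are placeholders.

```python
import time

def time_connections(open_connection, n=1000):
    """Open and close `n` connections via the supplied callable and
    return the average open time in milliseconds."""
    start = time.perf_counter()
    for _ in range(n):
        conn = open_connection()  # e.g. open an ADOMD.NET connection in the real test
        conn.close()
    return (time.perf_counter() - start) / n * 1000.0

# The five scenarios, as Data Source strings (names/IPs are placeholders;
# 2383 is the default Analysis Services port):
scenarios = {
    "name, default instance": "Data Source=bob-pc",
    "ip, default instance":   "Data Source=192.168.0.10",
    "name + instance name":   "Data Source=bob-pc\\tabular",
    "name + port":            "Data Source=bob-pc:2383",
    "ip + port":              "Data Source=192.168.0.10:2383",
}
```

The key point is that the last two forms specify the port directly, so the client never has to ask the SQL Browser service which port the named instance is listening on.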


The results show that if we are using a named instance (e.g. bob-pc\tabular) then connections can take almost double the time to make. Once we manually specify the port number and bypass the SQL Browser service, connection time is about the same as in the other test cases.

There was a very small improvement (2.5%) from using the IP instead of the name, but this was so small I’m not sure it would be worth the effort.

Lesson learned: use fixed port numbers when using SSAS instances. If you do need named instances and want to simplify connections, it should be possible to give each instance the same default port number (2383) and bind a different IP address to each instance. We’ve been using this trick on SQL DB Engine clusters for years to simplify connections and manageability.

3. Optimising Local Connections

If the client and SSAS are running on the same server (e.g. a development laptop), you can optimise the connection even further by using the IP address and/or avoiding the alias “localhost”.


OK, these are small numbers, but it’s a 45% gain to be made on connection time.

One eye opener for me was that there was a measurable difference showing that using “localhost” was not the best option.
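One plausible explanation (an assumption on my part, not something the original test proved) is name resolution: “localhost” can resolve to several addresses, often the IPv6 loopback `::1` before `127.0.0.1`, and the client tries them in order. You can see what your machine resolves with a couple of lines of Python:

```python
import socket

# List the addresses "localhost" resolves to, in the order a client
# would try them. 2383 is the default Analysis Services port.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "localhost", 2383, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```

Connecting straight to `127.0.0.1` skips both the lookup and any failed attempts against addresses the server isn’t listening on.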

4. IPv4 or IPv6?

I ran a quick test comparing the use of IPv4 to IPv6 for connecting to an SSAS server. This showed a 30% improvement with the older IPv4 (I have no idea why this would be so!)



5. Do I care about connection time?

Many tools like Excel either keep a connection open to SSAS or re-use a single connection, so connection time might not be that important.

Other tools may open a connection per request, or in some cases (Reporting Services) many, many connections per user – in this case the impact of faster connections is more noticeable.

Over the last few months we have been developing Reporting Services reports on top of an SSAS tabular model. One of the banes of my life has been the unusable performance of our reports, especially if they open a lot of connections.

This week I finally got some time to triage the issue and found the root cause, so I’ll post it here in case anyone else is hit by this.

How Can we Measure Connection Time on Reports?

Reporting Services 2012 onwards has additional performance data per dataset in the ExecutionLog3 view, in the AdditionalInfo XML column. In my case it looked something like this:


As you can see, it is taking 420ms to open a single connection to the Analysis Services server. This should be around 20ms or less. One of the annoying things about Reporting Services is that it will open a fresh connection for every dataset, so with more complex reports this became unusable very quickly, with some seemingly simple reports taking 30 seconds plus.

We used an XPath query and another report to visualise this performance data per dataset


which gives a report showing the time spent opening connections versus actual query time
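The extraction itself was done with an XPath query inside the report; purely to illustrate which values are being pulled out of AdditionalInfo, here is the same idea as a Python sketch. The element names (`Connections/Connection/ConnectionOpenTime`, `DataSets/DataSet`) follow the SSRS 2012 execution-log schema as I understand it, and the dataset name and timings are made up; check against your own ExecutionLog3 rows before relying on the exact shape.

```python
import xml.etree.ElementTree as ET

# Illustrative AdditionalInfo fragment (values and dataset name invented):
sample = """
<AdditionalInfo>
  <Connections>
    <Connection>
      <ConnectionOpenTime>420</ConnectionOpenTime>
      <DataSets>
        <DataSet>
          <Name>SalesByRegion</Name>
          <ExecuteReaderTime>35</ExecuteReaderTime>
        </DataSet>
      </DataSets>
    </Connection>
  </Connections>
</AdditionalInfo>
"""

root = ET.fromstring(sample)
for conn in root.iter("Connection"):
    open_ms = int(conn.findtext("ConnectionOpenTime"))
    for ds in conn.iter("DataSet"):
        # Compare time spent opening the connection with time reading data.
        print(ds.findtext("Name"),
              "open:", open_ms, "ms,",
              "read:", ds.findtext("ExecuteReaderTime"), "ms")
```

Feeding these two numbers per dataset into a chart is what made the pattern obvious: connection-open time dwarfed the actual query time.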



What was the cause?

After trying it on various servers, laptops and desktops, we managed to isolate it down to any of our SSAS servers running inside Hyper-V on Windows 2012 R2. Some further triage isolated the issue to VMs using a Broadcom network card with a known issue between the NIC and Hyper-V.



After some reconfiguration, the connections came down to under 20ms.
