SQL Centre of Excellence

BISM allows you to start either a Power View or an Excel report directly from the model in a SharePoint Gallery. Pretty cool stuff, and I really like that we have an equal choice between the two tools. You can start a report simply by clicking one of the linked report icons at the top right of a Gallery view (see below)

image

One initial problem we had is that the Excel report does not work out of the box with BISM. The Excel 2010 client will not recognise the linked report and you will get a dialog box asking for the Data Link properties (see below)

image

The solution is to install the Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2012, which is part of the free feature pack below.

http://www.microsoft.com/download/en/details.aspx?id=29065

Thanks to Greg Galloway (fellow MVP), who pointed me in the right direction:

http://www.artisconsulting.com/blogs/greggalloway/default.aspx

As you move your platform from running on a single database and a single server to a highly available scenario, be that a Mirror, AlwaysOn, or a cloud-based platform like Windows Azure or SQL Azure, you will find out that highly available does not equal highly reliable! The service will still go down during failover, and this will not be covered by your “uptime SLA”.

Unfortunately, the solution is to re-code the application tier to detect these “transient” errors and retry the offending code, hoping that the service is now running again on another node.

I will explore three common scenarios that need fault retry logic:

  • SQL Mirror (SQL 2005+)
  • SQL Always On (SQL 2012)
  • Windows or SQL Azure

1) Dealing with Fault Retry Logic on a SQL Mirror

The sales blurb may claim that a SQL Mirror gives zero downtime with its “transparent” client redirection. The truth is that during a failover you will lose your current connection, and a rollback of any uncommitted transactions will occur.

If you are happy for your application to crash for a little while (e.g. web users see an error page), and for in-progress transactions to be lost, then this is OK. Otherwise you will need the application code to support fault retry logic.

The good news is that there are some sample code blocks available on CodePlex that can be used as a starting point if you are developing on .NET:

http://sqlfaultretryprovide.codeplex.com/

If you are not developing on .NET, this code is still useful, as you can read from it the typical error codes that you need to look out for on failover. You can see the error numbers, along with a nice KB article, below.

 

// The relevant error codes, which are documented in http://support.microsoft.com/kb/109787
// and which seem to be worthwhile for retrying.  If errors occur which are not in this list
// they are simply rethrown.
static readonly int[] RetryErrorNumbers = new int[]
{
     1608,  // A network error was encountered while sending results to the front end
    10008,  // Bad token from SQL Server: datastream processing out of sync
    10010,  // Read from SQL Server failed
    10018,  // Error closing network connection
    10025,  // Write to SQL Server failed
    10053,  // Software caused connection abort
    10054,  // Connection reset by peer
    10058,  // Can't send after socket shutdown
    17824,  // Unable to write to server-side connection
    17825,  // Unable to close server-side connection
    17832,  // Unable to read login packet(s)
};

The sample code block uses a delegate to implement the following algorithm:

a) Try the method
b) If it throws an exception, check whether the exception number is one of the known errors associated with mirror failover (see above)
c) If it is a known error, sleep and then retry
d) Each retry attempt waits exponentially longer than the last, until the maximum number of retries is reached

 

/// <summary>
/// This method invokes the supplied delegate with retry logic wrapped around it.  No values are
/// returned.  If the delegate raises recoverable SQL Server or client errors, then the supplied
/// delegate is reinvoked after a certain amount of delay (which grows exponentially for each
/// retry) until the retry limit is exceeded, at which point the exception is rethrown.
/// Other exceptions are not caught and will be visible to callers.
/// </summary>
/// <param name="body">The delegate to invoke within the retry code.</param>
internal void WithMirrorRetry(RetryBody body)
{
    int retryCount = 0;

    while (true)
    {
        try
        {
            body();
            return;
        }
        catch (SqlException se)
        {
            // Rethrow unknown errors, and known errors once the retry limit is hit
            if (!RetryErrorNumbers.Contains<int>(se.Number))
                throw;
            if (retryCount >= _maxRetries)
                throw;
        }

        // Sleep exponentially longer each time through the retry loop
        Thread.Sleep((int)Math.Pow(_delayBase, retryCount));
        retryCount += 1;
    }
}
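To use the wrapper, put the whole unit of work, including opening the connection, inside the delegate, so that each retry reconnects from scratch. A minimal sketch (the connection string, table and parameters are placeholders of mine, not from the sample project):

```csharp
// Sketch only: connection string and SQL text are hypothetical placeholders
WithMirrorRetry(() =>
{
    using (var conn = new SqlConnection(
        "Data Source=Principal;Failover Partner=Mirror;Initial Catalog=Sales;Integrated Security=SSPI"))
    using (var cmd = new SqlCommand(
        "UPDATE dbo.Accounts SET Balance = Balance - @amt WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@amt", 100m);
        cmd.Parameters.AddWithValue("@id", 42);
        conn.Open();   // opened inside the delegate, so a retry gets a fresh connection
        cmd.ExecuteNonQuery();
    }
});
```

Note the Failover Partner keyword in the connection string, which is what gives the “transparent” redirection to the mirror once a fresh connection is made.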

 

2) Dealing with Fault Retry Logic for SQL 2012 Always On

In theory, SQL 2012 needs a similar approach to that of the Mirror Fault Retry Provider (above). However, there is now an expanded list of potential error numbers, with new messages specific to Always On. A quick search on sys.messages shows some 306 new messages that mention replicas or availability groups

image
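For reference, the quick search was along these lines; the exact LIKE filters are my approximation of it:

```sql
-- Count the new SQL 2012 messages mentioning replicas or availability groups
SELECT COUNT(*) AS NewMessages
FROM sys.messages
WHERE language_id = 1033  -- English only
  AND (text LIKE '%replica%' OR text LIKE '%availability group%');
```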

I haven’t managed to find a definitive list of which ones are relevant for detecting failover, but so far in my testing I am only hitting error codes 976 and 983 on top of the standard mirror ones listed in http://support.microsoft.com/kb/109787

Hopefully there will be an official KB article or code block from the patterns and practices group to help narrow down the list of error codes to use.

 

3) Dealing with Transient Errors in Windows Azure

Just to make matters more complex, environments like Windows Azure also have “transient” errors relating to failover, connection changes, or just the nature of the cloud.

You can read about the Transient Fault Handling Block and download it from this link
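As a sketch of what the block looks like in use; the class and constructor names here are as I recall them from the Enterprise Library documentation, so treat them as an assumption to verify against the current release:

```csharp
// Sketch of the Transient Fault Handling Block ("Topaz") - names are assumptions to verify
var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
    new ExponentialBackoff(retryCount: 5,
                           minBackoff: TimeSpan.FromSeconds(1),
                           maxBackoff: TimeSpan.FromSeconds(30),
                           deltaBackoff: TimeSpan.FromSeconds(2)));

retryPolicy.ExecuteAction(() =>
{
    // open the connection and run the commands here,
    // just as with the mirror retry delegate above
});
```

The idea is the same as the mirror provider: a detection strategy classifies the error, and the policy re-invokes the delegate with a growing backoff.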

A Word of Warning on Application Transactions

If your application is using Microsoft DTC for transactions, or explicitly starting database transactions outside of a stored procedure, then a failover will cause the entire transaction to roll back. Your retry logic needs to do a lot more work than just retry the last statement after re-opening the connection: it will need to retry all statements, including the start of the transaction.

The sample code on transient errors, or fault retry will not help you here. You will need to ensure that retry happens for the whole batch of statements within the application. This seems to be a very common practice with Java development, so make sure the project includes a review of this risk.
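In other words, the transaction itself belongs inside the retried delegate. A sketch using the WithMirrorRetry helper from earlier (the connection string field, table names and statements are placeholders of mine):

```csharp
// Retry the whole transaction, not just the last statement; names are placeholders
WithMirrorRetry(() =>
{
    using (var conn = new SqlConnection(_connectionString))
    {
        conn.Open();  // a fresh connection on every attempt
        using (var tran = conn.BeginTransaction())
        {
            using (var cmd = new SqlCommand("INSERT INTO dbo.OrderHeader ...", conn, tran))
                cmd.ExecuteNonQuery();
            using (var cmd = new SqlCommand("INSERT INTO dbo.OrderLines ...", conn, tran))
                cmd.ExecuteNonQuery();
            tran.Commit();  // commits only if every statement succeeded on this attempt
        }
    }
});
```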

SSAS Maestro Reading List

by Bob Duffy 14. April 2012 08:08

Vidas Matelis has made up a more extensive reading list here; this is my top five. Read these and, if you can, bring them along as reference material in eBook format.

The Maestro has no doubt changed a bit since I went in June 2011, but I would recommend making sure you are very familiar with all the references below:

1. SQL Server 2008 Analysis Services Unleashed

If you read only one book before doing the Maestro, this is it. Be warned: it is a huge tome of a book. Michael Barrett (who also certified as a Maestro) and I both carried this around in eBook form on the iPad and constantly referred to it, especially with regard to memory internals such as how the cleaner works. The exam was open book, so keep the books handy!

http://www.amazon.com/Microsoft-Server-Analysis-Services-Unleashed/dp/0672330016

2. Analysis Services 2008 Performance Guide

You must read this until you actually get the concept of MaxThreads, CoordinatorExecutionMode and all the other joyful settings for Analysis Services.

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=17303

3. Analysis Services 2008 R2 Operations Guide

Similar to the Performance Guide, a must read BEFORE attending the Maestro course.

http://msdn.microsoft.com/en-us/library/hh226085.aspx

4. Many to Many Revolution 2.0 and Optimisation Techniques

http://www.sqlbi.com/articles/many2many/

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=137

5. Expert Cube Development with Microsoft SQL Server 2008 Analysis Services

http://www.packtpub.com/expert-cube-development-with-microsoft-sql-server-2008-analysis-services/book

This book by Chris Webb, Alberto Ferrari and Marco Russo is to Analysis Services what the Kimball Lifecycle books were to dimensional data warehousing: something most consultants carry around in their bag when on a customer site.

Just passed the SSAS Maestro !!

by Bob Duffy 11. April 2012 04:30

Just got an email from Denny Lee in SQLCAT saying that I have just been certified as an Analysis Services Maestro.

Super happy, as it was a great opportunity to learn from some of the best minds in Analysis Services in Redmond. In terms of benefits, apart from some kudos, I now get to harass the other Maestros globally with obscure questions. This is tremendously valuable: having a peer group to bounce ideas off, or just to verify that you are doing the right thing, is great, especially when you are working in a small country without a large pool of peers. I chatted about this with Peter Harris recently with regard to the MCA and MCM programs, but until now had no one to chat with about Analysis Services.

http://borntolearn.mslearn.net/btl/b/weblog/archive/2012/03/26/sql-saturday-105-live-in-dublin-ireland.aspx

You can read about the Maestro program here:

I can really recommend this level 500 program to anyone who is heavily into Analysis Services. Marco Russo and Alberto Ferrari are now running some of the Maestro training courses outside of Redmond.

http://sqlblog.com/blogs/marco_russo/archive/2012/03/20/ssas-maestro-training-in-july-2012-ssasmaestro-ssas.aspx

When I first heard about the Maestro programme from my friend Dirk Gubbels (who passed one of the early rotations), I was like a kid waiting for Christmas to come. Getting to spend a week to focus on going deeper into a product that you work with is so worthwhile. I could have worked on projects for 10 years and not acquired as much knowledge on Internals and VLDB best practices as spending a day on the Maestro program.

So what’s next on certifications? Well, just in time for the 2012 stack to be released. Arghhhhh

Paradise Lost? Advise Found

by Carmel 3. April 2012 03:28

Help me please my data’s fried

My manager is fuming

My users just cannot get in

It’s very time-consuming

 

Resources have all gone to pot

My memory has amnesia

The processor is eaten up

And the network is freezing!

 

Our data feeds are going slow

And inserts do not happen

Our dashboards are not up to date

Yeah - Someone’s for a slappin’

 

Infrastructure took a look,

And bought some new equipment

They assured me things would go

At lightspeed, in an instant

 

We got a team and hauled it in

New servers, SAN and Rack

We moved the database and app

Improvement? - not jack.

 

Infrastructure bought some more

Disks, solid state and all

And memory, upgraded to the max

Things better? Not at all

 

Ah now they said, we have instead

Bought Fusion IO

They plugged it in, it was a SIN

Performance gain? NO

 

They asked their devs to take a look

They said it was the SQL

And that it was above their heads

So then they called us people

 

In we went to take a look

Your database size?, we asked

There was a pause, a big deep breath

A GIGABYTE they gasped

 

Quite curiously, I gazed

Upon their stored procedure

It was quite long, I will admit

I nearly had a seizure!

 

With UDF’s, temp tables

And cursors abounding

Trigger-Happy to be sure

It really was astounding!

 

I took a look at their IO

T’was something like the matrix

Covering Indexes? Not a one

The scanning was CRAZY!

 

Non-sargable, crossed numa nodes

Blown cache, both plan and dough

A total mess – is this a test?

It really can’t be so!

 

Using Dynamic SQL doesn't always perform

And no param's is not the norm!

So what to do that is the question?

Maybe they'd take a small suggestion?

When your data fits on a memory stick

More hardware's unlikely to do the trick!!

 

 

PC Install Sweepstake

by Bob Duffy 2. April 2012 07:50

Ok, came back from SqlBits, fired up my trusty PC to do some work and BOOM. The SSD drive which is my system drive has died. Who was it who said SSDs were now quite reliable?!

Anyway, the sweepstake bet is on for how long the re-install of all applications and recovery of documents will take (we don’t use imaging, so this is old-school setup). I started the re-install with a fresh SSD drive at 10:00am

So far bets are:

  • Carmel: 18:00
  • Fintan: 18:30
  • Esteban: 17:30
  • Emer: 16:00

What apps do I have on my desktop – something like:

  • More Dell drivers than you can shake a stick at (arggh, network driver isn’t native to windows)
  • Office 2010
  • Visual Studio 2008
  • Visual Studio 2010
  • SQL Server 2005/2008R2/2012 (Client/DB/SSAS/SSRS)
  • Hyper-V remote admin tools
  • Power Pivot
  • Camtasia
  • Some 10GB of drop box and live mesh content
  • Some 70 windows updates and a lorra service packs
  • Lots of misc. utilities like Dropbox, Live Mesh, Live Writer, WinRAR

 

And the winner of the Sweepstake was……

Emer wins with a final finish time of 16:07
