SQL Centre of Excellence

You can’t start learning about DAX and Tabular models within SSAS without coming across Marco Russo and Alberto Ferrari, who run SQLBI in Italy. They co-authored, with Chris Webb, the leading book on Tabular model design.

The good news is we have partnered with SQLBI to bring two of their courses to Dublin.

On the 7th-8th April a 2 Day Tabular Workshop

http://www.prodata.ie/events/2014/SSAS_TabularWorkshop.aspx

and

On the 4th-6th June a 3 day Advanced DAX Workshop

http://www.prodata.ie/events/2014/Advanced_DAX_Workshop.aspx

I’m super excited about these. DAX is a must-know language for anyone working with PowerPivot or the Tabular model, and who better to learn it from than the world’s best experts, here in Dublin!

Hope to see you there.

Often we have advised customers wanting performance out of their Analysis Services engines, be that MOLAP, Tabular or PowerPivot, to avoid the public cloud offerings from Microsoft and in some cases even to avoid virtualisation on premises.

What? Why shouldn’t we virtualise Analysis Services?

The issue is not so much the virtualisation products like Azure, Hyper-V or VMware. All of these products can run Analysis Services virtualised with little overhead compared to a physical server. The days of consultants and naysayers claiming that “virtualisation” is the bottleneck are largely gone, or confined to extreme-scale cases.

The real problem stems from the choice of hardware for the host server. This is a gross oversimplification, but basically we can buy two different types of servers:

1. Very fast PCs/servers with a small number of cores and excellent single-threaded and memory performance. Think of the one-socket, almost 4 GHz beast you play games on, with SSDs and 1,800+ FSB. Maybe something like Aaron Bertrand's new toy http://www.sqlperformance.com/2014/01/system-configuration/justifying-the-new-mac-pro
image

2. Servers with high capacity, lots of cores and much slower CPU and memory performance. Think of your four-NUMA-node, 1.9 GHz, 80-core virtualisation host with 800 FSB RAM speed and, ahem, “Power Saving” features to slow the CPU down some more!

Guess which type of server people typically buy for virtualisation projects and guess which type suits the CPU and memory intensive workloads of Analysis Services?

I discuss hardware selection for Analysis Services in more detail in this blog post:

http://blogs.prodata.ie/post/Selecting-Hardware-for-Analysis-Services-(10GB-1TB-size).aspx

Some of my customers have fixed this problem by deploying “high compute” pods within their VMware infrastructure, specially suited to the single-threaded sensitivity of Analysis Services, but sadly the Microsoft Azure IaaS offerings have historically been very much “one size fits all” from a clock speed and compute performance perspective (RAM and core counts do vary).

Just to be clear, there is nothing stopping you from virtualising SSAS workloads right now, and I’m sure some people have and are quite happy with the “adequate” performance they get. However, performance is often an implicit rather than explicitly stated requirement, and customers may not be happy when the “production” solution runs slower than, say, my 1,000 euro notebook or my old desktop.

So What is Changing? Enter “Big Compute”

After initially tackling the broader workloads of web applications and OLTP, Microsoft is now starting to look deeply at analytical workloads in the cloud, both with its platform offerings and by starting to provide VM servers aimed at high-compute workloads.

http://msdn.microsoft.com/library/windowsazure/dn594431.aspx

What does “Big Compute” actually mean? Well, something like this:

  • Intel Xeon E5-2670 2.6 GHz CPU
  • 16 cores
  • 112 GB DDR3-1600 RAM
  • 40 Gbps back-end connectivity (wow!)

Some limitations

  • This is only available in EU West (Amsterdam).
  • This is only available for PaaS; it is not available as an IaaS image for you to install SQL Server on.

Obviously I am hoping that these limitations lift eventually so we can put Tabular models and SSAS in the cloud without the embarrassing massive drop in performance compared to our laptops.

Call to action – ask Microsoft to:

  • Offer “small compute” images. I want a 4/8-core VM with 1,600 FSB, a 2.8+ GHz CPU and 64-128 GB RAM.
  • Offer the “Big Compute” images for IaaS and for customers with analytics workloads on SSAS. Big Compute is not just for HPC nerds, guys!

This is just a short guide or Q&A to help speakers who are submitting for SQL Saturday.

http://www.sqlsaturday.com/310/eventhome.aspx

Often we have had many more speakers and topics suggested than we can accommodate, so we have to select based on the number of tracks and what we think will appeal to the Irish technical community at the time.

How are Sessions and Speakers Selected?

We plan to use a minimum of five people from the Irish SQL Community to vote on speakers and topics. This vote is used to narrow the list. Once the list is narrowed down, we review it to make sure that we have covered the tracks as evenly as we can.

  • Some consideration is often given to speakers who are popular in Ireland because they have spoken before and scored in the top quartile at previous events. The community will recognise their names and want to hear them again.
  • We also have an allowance for speakers who “nearly” made it into a previous event but had to be bumped due to numbers or to balance the agenda. In this case we will try to rotate so that they, say, get accepted every other year and do not always “just miss”.
  • Where possible we will give local speakers a fair chance, as it’s a SQL Saturday/PASS remit to grow local speaking talent in the community. In Ireland there are only a few local speakers anyway, so often we ask really new speakers to consider lightning talks or suggest they do a local UG first. SQL Saturday Dublin may not be the right event for your first public speaking experience, as it’s a large and technically excellent audience, which can be intimidating.
  • As it’s a Microsoft technology event we love having Microsoft FT and product team speakers over.
  • We have to consider the tracks. In previous years, for example, we may have had only a few developer-focused sessions, so if you submitted a session called “The Entity Framework Deep Dive for Developers” it wouldn’t matter who you are – you would have a high probability of being selected.

What Level Should I Pitch the Talk At?

Dublin has a “lot” of senior data professionals (5+ years’ experience). With this in mind I would suggest:

  • If you are presenting on technology that most people know the basics of, then people may expect intermediate or advanced content. For example: TSQL, indexing, mirroring, replication. Focus on deep dives and troubleshooting, and expect people to know the basics.
  • If you are presenting on an emerging technology (for example Extended Events, Azure, SQL 2014 features, PDW, Hadoop), then a lower level may be appropriate. Saying that, an “Extended Events Deep Dive” may be a crowd puller too.
  • A session entitled “Beginners Guide to Installing SQL Server” is never going to be selected, but “Automating SQL Deployments with Sysprep and PowerShell” or “Deep Dive into Troubleshooting Installation” may be.

What Tracks Are You Running?

We accept topics covering a few areas:

  • DBA topics such as monitoring, performance tuning, backup, data movement, high availability, features of the database engine, internals, query plans, Extended Events, virtualisation, consolidation, storage.
  • Developer topics: either data modelling, DBEngine development (TSQL, XML, CLR) or data-related application development topics such as data caching, ADO.Net, Entity Framework, messaging, or design patterns for data-centric applications and the cloud.
  • Business Intelligence. Subjects across Power BI, MOLAP, Tabular, Emerging cloud BI capabilities, SharePoint, data warehousing, SSIS, Big Data, reporting and data visualisation.
  • The Cloud: Any subject related to managing and consuming data in the cloud
  • Special Interest: Personal Development, Certification, Soft Skills, Career growth.

What Topics are Hot this Year?

  • As it’s a SQL 2014 launch event, this is a major focus. Almost everyone is new to SQL 2014 and we want to help people skill up, at least enough to understand its capabilities and features.
  • Some core topics we will always need to cover: performance troubleshooting, high availability and DR, TSQL best and worst practices, indexing deep dives.
  • In previous years we have been short of developer topics: The Entity Framework, emerging data access tools, data unit testing, design patterns, data caching frameworks, data modelling.

Things to Avoid

  • If you have submitted sessions to #sqlsatdublin in a previous year and they were not selected, I wouldn’t recommend just blanket re-submitting them. It really isn’t a lottery, and if they were not accepted previously I would say they have zero chance of being selected.
  • If you have already spoken on a topic at a previous SQL Saturday Dublin then it may not be worth re-submitting the same session.
  • If you have been using the same topic at every other SQL Saturday across Europe it’s OK to submit it, but like a movie we may have seen it before. This can be a good thing if we liked it and we think no one else in Ireland has seen it, but it can be a bad thing if we think a lot of people may have seen it.
  • Funny topics are good only IF your topic is funny. For example Rob Volk’s “How to Lose Friends with Unicode”. However, in most cases it’s better to try and use the title to help explain the content. For example, we changed one of our session titles last year from “IO IO it’s off to work we go” to “File and Filegroup Design Patterns for Optimising Storage IO”.
  • SQL 2012 is not the new kid on the block any more, so try and avoid any “Introduction to SQL 2012” topics. We are expecting talks aimed at SQL 2012 to be deeper now, unless they are aimed at very obscure features.
  • If you are presenting on a SQL 2012 feature which is improved in SQL 2014 we will be favouring sessions that take this into account. Column Store and Availability Groups are key examples here.
  • If you are reading this and decide to choose a topic based on what you think the audience wants to see, MAKE SURE you know the topic first. We do bother to check out a speaker’s fit for a topic by reading blogs, etc. We are aiming to have an awesome event and to make sure that both speakers and attendees get the most from the experience – we don’t want anyone to be eaten alive because they have copied slides from books online and are struggling with filling in the blanks.

 

So Why Speak at SQL Saturday Dublin?

Here’s a blog post on why William Durkin started speaking at SQL Saturdays:

http://sqlsouthwest.co.uk/2014/02/09/community-interview-whats-it-like-to-speak-at-a-sql-saturday/

For speakers coming to SQL Saturday we promise you:

  • An interactive and interested audience. There are hundreds of SQL professionals in Ireland who love to learn more about SQL Server.
  • An exclusive speaker event on Friday and a unique gift for coming to Dublin. Previous years have included a party on a floating barge and a night out at the famous Guinness brewery.
  • A speaker T-shirt for your collection.
  • A copy of your feedback and ranking from the audience (if you opt in); this may help you gain experience and credibility for speaking at other events.
  • Grow your reputation as a technical speaker in Europe.
  • Take part in the SQL Saturday activities and after event.
  • Our undying gratitude – you are awesome.

 

Hope to see you there,

Bob

Know any SQL Server experts who are going for a technical job interview?

Have some fun by giving them 60 seconds to answer as many as possible of these true/false questions.

  1. A clustered table is better than a heap?
  2. You should never use a GUID as a primary key?
  3. Mirroring is better than replication?
  4. The datetime2 data type should always be used over datetime?
  5. Page compression is recommended over row compression?
  6. Compressed tables are faster than uncompressed tables?
  7. Adding SSDs will make queries faster?
  8. Adding more memory will make queries faster?
  9. Cursors should never be used?
  10. Table variables are better than temp tables?
  Bonus: The Tabular model is faster than the MOLAP model?

Thanks to Vedran Kesegic (SQL MCM) who popped over for dinner last night and inspired these interview questions.

In my session on load testing Analysis Services we cover a few areas:

  • How to create unit tests for Analysis Services using a variety of tools
  • How to capture and analyse a workload using a server-side trace
  • How to use Visual Studio to run load test scenarios and do regression testing in Excel.
  • A case study on load testing Analysis Services to support thousands of users.
  • A case study on converting an enterprise BI solution from MOLAP to tabular and testing it.

If you want to play with the deck, source code and demos I used you can download them below.

If you want to recreate the MOLAP model I used, with 100 million rows in the sales table, there is a previous blog post on how to scale Adventure Works below:

I’ll post another blog entry later with my 100 million row Tabular model with the same schema as Adventure Works. This is useful as you can run almost exactly the same MDX on both MOLAP and Tabular.

Deck and load test source download Link: ssas_load_test_deck_source.zip
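
If you just want a minimal starting point before opening the full deck, below is a rough sketch (not the harness from the session) of timing a batch of MDX queries across several worker threads with ADOMD.NET. The connection string, the MDX query and the query/thread counts are placeholders to swap for your own.

Imports System.Diagnostics
Imports System.Threading.Tasks
Imports Microsoft.AnalysisServices.AdomdClient

Module SimpleSsasLoadTest

    ' Placeholder connection string and MDX - point these at your own cube.
    Private Const ConnStr As String = "Data Source=localhost;Catalog=AdventureWorksDW"
    Private Const Mdx As String = "SELECT [Measures].[Reseller Sales Amount] ON 0 FROM [Adventure Works]"

    Sub Main()
        Const TotalQueries As Integer = 100
        Const Threads As Integer = 4
        Dim sw As Stopwatch = Stopwatch.StartNew()

        ' Fire the same query repeatedly across a fixed number of worker threads.
        Parallel.For(0, TotalQueries,
                     New ParallelOptions With {.MaxDegreeOfParallelism = Threads},
                     Sub(i)
                         Using con As New AdomdConnection(ConnStr)
                             con.Open()
                             Using cmd As New AdomdCommand(Mdx, con)
                                 cmd.ExecuteCellSet()
                             End Using
                         End Using
                     End Sub)

        sw.Stop()
        Console.WriteLine(String.Format("{0} queries on {1} threads took {2:n1} seconds ({3:n1} queries/sec)",
                                        TotalQueries, Threads, sw.Elapsed.TotalSeconds,
                                        TotalQueries / sw.Elapsed.TotalSeconds))
        Console.ReadKey()
    End Sub

End Module

The deck and source in the download cover the fuller approach, with Visual Studio load tests and server-side traces.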

Update (06/07/2015): As a few people have pointed out, this is more of a named instance issue than a “Tabular” specific issue. I added some details on how to optimise SSAS connections here:

http://blogs.prodata.ie/post/Optimising-Connections-to-SSAS-(MOLAPTabular).aspx

http://blogs.prodata.ie/post/Solved-slow-SSAS-connections-on-Hyper-V.aspx

 

-- original post ------------------------------------------------------------------------------------------------------

Seasoned developers are often shocked when they find out that SSAS doesn’t have any support for connection pooling.

With SSAS MOLAP this didn’t matter so much, as connections are much faster to acquire than, say, with the DBEngine and the cost is considered trivial, so we don’t pool connections and neither does the product.

As a quick test I wrote a console application to open 1,000 connections on both the MOLAP and Tabular engines and counted the time taken.

image

In this test the Tabular engine was some 236% slower at opening connections; in a test against a remote server it was still about 2x slower.

This will really hurt workloads like Reporting Services, which opens a lot of connections for small parameter queries, and so far this has really hampered the Tabular model in competing against MOLAP models in load testing. Even if the Tabular queries are efficient, the additional work to establish a connection drags down throughput under load.

If you are writing applications that open highly concurrent Tabular connections, it certainly raises the question of whether you should manually pool the connections rather than rely on Analysis Services (see the sketch after the test code below).

If you are using, say, Reporting Services, it also raises the importance of considering cached datasets for re-usable parameter queries and slowly changing data.

Here is the source code if you want to repeat the test.

Imports Microsoft.AnalysisServices.AdomdClient
Module Module1
 
    Sub Main()
        Dim t1 As Integer = System.Environment.TickCount
        Dim t2 As Integer
        Dim t3 As Integer
 
        Console.WriteLine("Testing Speed of 1000 Connections to MOLAP")
        For x = 1 To 1000
            Using con As New AdomdConnection("Data Source=localhost;Catalog=AdventureWorksDW")
                con.Open()
                con.Close()
            End Using
        Next
        t2 = System.Environment.TickCount
 
        Console.WriteLine(String.Format("Time Take={0} seconds", (t2 - t1) / 1000))
 
        Console.WriteLine("Testing Speed of 1000 Connections to Tabular")
        For x = 1 To 1000
            Using con As New AdomdConnection("Data Source=localhost\tabular;Catalog=AdventureWorksDW")
                con.Open()
                con.Close()
            End Using
        Next
        t3 = System.Environment.TickCount
        Console.WriteLine(String.Format("Time Take={0} seconds", (t3 - t2) / 1000))
 
        Console.WriteLine("Press any Key")
        Console.ReadKey()
    End Sub
 
End Module
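
If you do decide to pool connections yourself rather than rely on Analysis Services, the sketch below shows the basic idea. This is only a minimal illustration, not production code – it ignores thread affinity, idle timeouts, broken connections and session state considerations – and the connection string is a placeholder.

Imports System.Collections.Concurrent
Imports System.Data
Imports Microsoft.AnalysisServices.AdomdClient

' Minimal manual pool so the (relatively expensive) Tabular connection
' handshake is paid once per pooled connection rather than once per query.
Public Class SimpleAdomdPool

    Private ReadOnly _connectionString As String
    Private ReadOnly _pool As New ConcurrentBag(Of AdomdConnection)

    Public Sub New(connectionString As String)
        _connectionString = connectionString
    End Sub

    ' Hand out an open connection, reusing a pooled one when available.
    Public Function Acquire() As AdomdConnection
        Dim con As AdomdConnection = Nothing
        If _pool.TryTake(con) AndAlso con.State = ConnectionState.Open Then
            Return con
        End If
        con = New AdomdConnection(_connectionString)
        con.Open()
        Return con
    End Function

    ' Return the connection to the pool instead of closing it.
    Public Sub Release(con As AdomdConnection)
        If con IsNot Nothing AndAlso con.State = ConnectionState.Open Then
            _pool.Add(con)
        End If
    End Sub

End Class

Usage becomes Acquire / query / Release instead of New / Open / Close, so the connection cost in the test above is paid roughly once per pooled connection rather than 1,000 times.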

There are generally three reasons for partitioning an Analysis Services model:

  1. To improve processing time by processing partitions in parallel
  2. To improve processing time by allowing for incremental processing of some sort
  3. To improve query time where most queries only need a smaller slice of data – say a week or month. 

To help test this I created an Adventure Works Tabular model with about 100 million rows in the two sales tables and ran some tests on both partitioned and unpartitioned models. I also created an equivalent MOLAP model and ran the same tests.

1. Partitioning to Improve Processing Performance

The chart below shows total processing time for our 100 million row AdventureWorks model on an 8-core laptop with an SSD drive.

When we partitioned the models, the MOLAP model’s processing time dropped by 31% to around 12 minutes. The Tabular model actually took 5% longer, going from 20:54 to 21:56.

image

Why did the Tabular model take longer to process when we partitioned it? Well, a few reasons come to mind:

  • The Tabular model does not support parallel processing of partitions.
  • The underlying data warehouse was not partitioned, so a full table scan was performed by the DBEngine per partition. In MOLAP this was achievable in parallel, so we could take advantage of SQL Server’s ability to execute parallel table scans more efficiently. We can’t do this in the Tabular world (yet).
  • We need to marshal more jobs to do the same work, which doesn’t help much if they are running sequentially.

So partitioning a tabular model to improve processing may do more harm than good from this perspective.

2. Partitioning to allow Incremental Processing

This is a valid use case for partitioning. Note that you do not “have” to partition to get incremental processing, as the “ProcessAdd” command can be used as well. This is discussed more on Marco’s blog below.

http://www.sqlbi.com/articles/incremental-processing-in-tabular-using-process-add
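
For reference, a bare-bones AMO sketch of issuing a ProcessAdd against a single partition is shown below. It assumes a 2012/2014-style model where the Tabular metadata is exposed through the multidimensional AMO objects (cube, measure group, partition); the database, measure group and partition names are placeholders, and in a real incremental load the partition’s source query (or an out-of-line binding, as Marco’s article describes) must return only the new rows or you will end up with duplicates.

Imports Microsoft.AnalysisServices

Module ProcessAddSketch

    Sub Main()
        Dim server As New Server()
        server.Connect("Data Source=localhost\tabular")
        Try
            ' Placeholder names - swap in your own database / measure group / partition.
            Dim db As Database = server.Databases.GetByName("AdventureWorksDW")
            Dim mg As MeasureGroup = db.Cubes(0).MeasureGroups.GetByName("Reseller Sales")
            Dim p As Partition = mg.Partitions.GetByName("Reseller Sales Current")

            ' ProcessAdd appends whatever the partition's source query returns,
            ' so that query must be restricted to the new rows only.
            p.Process(ProcessType.ProcessAdd)
        Finally
            server.Disconnect()
        End Try
    End Sub

End Module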

3. Partitioning to improve Query Performance

With MOLAP cubes it’s a well-known practice to partition a model to improve storage engine performance by:

  • Allowing for partition elimination
  • Having smaller subcube events and less storage engine work

Does this hold true for the Tabular model? Well, to test this we ran a load test of 5,000 queries on four threads against a Tabular model with and without partitioning and measured the average number of queries served per second.

With MOLAP, as we partitioned the model, we saw a 21% increase in throughput, from 28 Tests/Sec to 34 Tests/Sec.

image

With the Tabular model we saw an 11% decrease in performance, from 17.23 Tests/Sec to 15.3 Tests/Sec.

image

So the Conclusion?

While partitioning is often a must-have in the MOLAP world, it is of much more limited value in the Tabular world. Consider carefully whether you even need to bother partitioning your model!

This is underscored in section 4.0 of the excellent Tabular Performance Tuning Guide written by John Sirmon and Co.

http://msdn.microsoft.com/en-us/library/dn393915.aspx

Of course the only way to know for sure how much partitioning may hurt you is to do some performance testing, so come to one of my load testing sessions at a SQL Saturday style event ;-)

Developing Tabular models in Visual Studio can be super painful, as the interface gets very “sluggish” as tables get larger. In some cases customers have told me they wait 5-10 minutes at a time to, say, change the name of a column or add a calculation.

Another of the annoying things about Tabular models is that if you have a partitioned fact it can only be processed on one thread PER TABLE. If we start processing and profile the queries sent to SQL Server we will only see one of them, rather than one per partition or one per core as you get with MOLAP. There is pretty much no way around this ;-(

image

One solution is to consider using a smaller volume of data for development and then increasing the volume when iterative development has finished.

My favourite approach so far is to use a stored procedure for each partition instead of inline SQL, and then use TABLESAMPLE to reduce the volume. We can then have a setting which we change to set the percentage of facts we want in the model. Just change the setting and reprocess to increase the volume. Here is a simple example based on the Adventure Works Reseller Sales:

ALTER PROCEDURE usp_FactResellerSalesPartition
    @MinDateKey int = 762
    ,@MaxDateKey int = 792
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max)
    DECLARE @PercentSample int = 1    -- set this dynamically from, say, a settings table

    -- Build the partition query, sampling the fact table when @PercentSample is below 100
    SET @sql = 'SELECT * FROM [dbo].[FactResellerSales] '
    IF @PercentSample < 100
        SET @sql = @sql + 'TABLESAMPLE (' + CONVERT(varchar(3), @PercentSample) + ' PERCENT) REPEATABLE (100)'
    SET @sql = @sql + ' WHERE OrderDateKey BETWEEN @MinDateKey AND @MaxDateKey'

    EXEC sp_executesql @sql, N'@MinDateKey int, @MaxDateKey int', @MinDateKey, @MaxDateKey
END
GO
EXEC usp_FactResellerSalesPartition

Within Visual Studio we can then just call the stored procedure within the partition manager dialog box.

image

An added benefit of this approach is that if we need to add a column to the fact table we only need to update the single stored procedure rather than every hard-coded TSQL statement!

Note that we “could” also use a TVF and we actually do this in a lot of customer systems, but for tabular models I choose to use a stored procedure as the TABLESAMPLE function is not supported within a TVF.

As a final note, TABLESAMPLE needs a physical table rather than a view. This is a best practice for most models anyway, but if you are using a view you’ll need to expand the view out into the stored procedure – or use the ETL to create a physical table.

Using this approach I got my processing time in Visual Studio down from 30 minutes to 90 seconds. Much nicer, and iterative development was a dream.

If you like this approach or have other suggestions – leave a comment.

Bob’s Speaking Schedule early 2014

by Bob Duffy 19. February 2014 12:38

Here’s a list of events I am speaking at this year so far. I foolishly have a different topic at each event rather than delivering the same content everywhere.

1) Load Testing Analysis Services. 1st March. #sqlsat278 Budapest

Ever deployed an Analysis Services cube that worked perfectly well with one user on the development server, only to find that it doesn't meet the required volumes of user concurrency? This session focuses on tools and methodology for load testing Analysis Services in highly concurrent environments, including how to locate resource bottlenecks and the appropriate configuration settings that can help improve performance. Sample source code will be supplied to help you load test Analysis Services. We will be focused on the MOLAP engine, but the techniques are equally applicable to the Tabular model.

www.sqlsaturday.com/227/eventhome.aspx

A video of an older version of this presentation at SqlBits in 2012 is available online:

http://sqlbits.com/Sessions/Event10/Load_Testing_Analysis_Services

2) Migrating to Azure and the Cloud.  29th March #sqlsat267 Denmark

So your CTO wants you to migrate your mission critical database application to the cloud. What's involved and what are the pitfalls? In one hour we will take a sample ecommerce application and database, show tools available for both schema and data migration and then demo performance and scalability impacts.

Both database and application options are discussed.

 http://www.sqlsaturday.com/275/eventhome.aspx

3) Cube Processing Deep Dive. 8th April Dublin SQLUG, Ireland

Ever wondered what actually happens when a cube is “processed”, why it takes so long, how you can configure and optimise cube processing, or what strategies people commonly use for incremental processing?

This session provides a deep dive into cube processing for MOLAP cubes, to help understand how it works and what you can do to work with it.

Come to this session if you are working with Microsoft Analysis Services from either a development, Administrative or Architectural perspective.
Most of the content is aimed at MOLAP cubes, but we will also include a brief discussion of the newer Tabular models.

http://www.eventbrite.ie/e/business-intelligence-datawarehousing-cube-processing-deep-dive-tickets-9549674333?aff=eorg

4) Optimising a dot.net Solution on the SQL Platform. 12th April  #sqlsat267 Portugal

In this case study we will take a sample dot.net ecommerce application running in a load test harness and show common performance worst practices, how to correct them and the impact on performance. During a series of live demos we aim to test just how far we can optimise transactional processing. This session aims to help developers and DBAs understand the performance impact of choices made within a dot.net application and its interaction with SQL Server.

www.sqlsaturday.com/267/eventhome.aspx

As a challenge for the recent Power BI demo competition I attempted to take our entire 90 minute Irish Economic Crisis BI demo and do three things:

  • Reduce it to six minutes, with a 2-3 minute making-of video added on.
  • Do the entire demo using Q&A (well, apart from the Power Map bit).
  • Try and get one of every type of visualisation possible with Q&A into the demo.

On the way I learned a lot about what you could and couldn’t do with the latest addition to the Power BI Stack – the natural language visualisation tool Q&A.

Getting Started

You don’t have to do anything special to get started with Q&A. Just right-click your Power Pivot workbook in the Power BI App and select “Add to Q&A”.

image

Once you do that, a new type of search will appear at the top of the web application.

image

When you are in, you can ask any question using natural language, or click on a featured question.
image

While you may start out using natural language, you very quickly realise that the tool is keyword driven, using value names, table names or column names plus a few reserved words such as “map” and “column chart”. Commas in natural language help in places for lists. The tool shows you the words it has ignored by greying them out, and also gives you hints on how to re-express the question, along with suggestions for what the next question could be.

Let’s look at the different types of visualisation you can get out of Q&A.

1. The Card View – Good for scalar values and more

Not much to say here. It can display multiple answers if you use a comma or the word “and”.

image

I am kinda disappointed that Q&A doesn't support external images in the card view (shoot the security teams!). External images allow for some nice views in the normal version of Power View (see below).
image

 

2. The Table

Q&A treats the name of a table as a noun and each value in a table as a Proper Noun. This allows for queries like the one below.(Note the spelling mistake on “show” – we could have just typed in “Countries Europe” and achieved the same result.

image

You can customise what you want to appear as columns for a noun (or entity in IT speak) by using the Default Field list in Power Pivot.

image

One limitation is if you want to display some default measures when displaying each country, e.g. its population. To date Q&A doesn't have a way to describe an entity that crosses multiple tables. I got around this by adding calculated columns in Power Pivot to denormalise attributes onto the right tables, such as adding population to the countries table.

Another limitation is that I wanted to have multiple related measures coming from one table. E.g. the economic indicators table may have entities such as “growth”, “debt” and “income”, which would be 2-3 measures each. Sure, I could split the measure group, but this would be messy model-wise.

3. Ask the Map, Ask the Map…

The map visualisation works very seamlessly.

image

One annoyance is the continual privacy warning. Sure, I get that you want to ask me if I’m OK sending my data to Bing to display a Bing map. But can we not have a “yes, now go away forever” rather than a “Mr Clippit” style interruption mid-demo?

image

If possible it’s a good idea to use the advanced reporting properties in Power Pivot to tell the model which fields contain which types of geographic data. It handled continents, countries, counties and custom groups (e.g. the EuroZone) very well.

image

4. The Column Chart

With the reserved words “column chart” you can force the same data to come out as a column chart, or Q&A will often choose a column chart as the appropriate visualisation.

Go Ireland on GDP per capita for 2007! (pity about 2013).

image

 

The axis always seems to start at zero. Most data visualisation gurus like Stephen Few (just read his latest book, “Now You See It”) like to start the axis at just below the lowest value, as the purpose of a bar chart is to demonstrate differences, right?

5. The Line Chart

The line chart lets you do both a simple series and multiple series.

image

 

Unlike the bar chart, the line chart avoids starting the axis at zero (weird – I would have started the line chart at zero and the bar at a staggered value).

image

The line chart does seem to start the Y axis at a non-zero value, unlike the column chart, which doesn't make sense to me, but it does help make better use of the screen space.

6. The Trellis Chart

Nice!

image

7. The Scatter Chart (The Holy Grail of Visualisations)

A great visualisation if used right. The best way to use an animated scatter seems to be to think of it as a “Gartner quadrant” chart. E.g. for the chart below we have Govt Debt per Capita versus GDP Growth, so we can see how Ireland moved from the top left quadrant (high growth and low debt) to the far right and nearly bottom quadrant (high debt and low growth).

In this chart, GDP growth below zero means countries are in “recession”.

image

One major obvious issue: where's the legend, dude! Try as I may, I could not get a legend from Q&A or find the words to express that I wanted the country as a label/colour item. I hope that this is made easier in the RTM version.

Here is the complete demo video along with a 2 minute behind the scenes at the end.

Enjoy!

The Irish Economic Crisis - a tale of money loss and Power BI Q+A

Don’t Forget Synonyms!

You may find that there are multiple natural language words to describe an entity (table or column), or that the name used in the model is not quite right.

A new synonym editor has been added to Power Pivot to support editing these. In the example below we have many types of debt, but we just want “Govt Debt (€m)” to be known as “Debt”.

image

 

Two warnings:

  1. You NEED to use Office 365 as downloaded from the Power BI site, as opposed to a version deployed in another fashion (like from MSDN, or the RTM Office 365 site).
  2. If you open the workbook in a version of Excel that does not support synonyms you lose all the synonyms. I found this out the hard way, as my desktop has Office 365 RTM and my laptop had the Power BI version – I didn’t realise there was a difference until I kept losing all the synonyms.

My Wish List for Q&A

  1. Let ME choose if I want to allow external images and then please go away and stop pestering me with warnings while I am trying to work.
  2. Add Q&A into excel, this helps us folk who use iterative development. E.g. we don't have to keep uploading the workbook to see the result.
  3. Allow reports to be saved as first class Power View reports
  4. Allow Column charts to have a staggered axis like line charts (in normal Power View and Q&A)
  5. Show me the legends on scatter charts, or let me choose with a “legend” keyword.
  6. Allow control of the “colour” attribute in scatter charts. Maybe with a reserved word like “color as “
  7. Allow me to make a report with the designer and then show me the language that would have made the same report.
  8. The Q&A window seems to only work well at 1,280. Projectors of the world are still mainly 1024!
  9. Add microphone support like Google search
    image

If EVER there was a feature that needed voice recognition, then Q&A is it!

 

Behind the Scenes

For those of you who don’t want to see the actual demo video, here is a 90 second behind-the-scenes look at a Power BI project.
