SQL Centre of Excellence

On the back of an awesome SQL Saturday here in Dublin, I have been inspired/motivated to set myself a goal of blogging much more than I have been doing lately (so just one blog post hits that goal, as I haven't been blogging!).

What better topic to discuss than the recent release of SQL Server 2014 CTP1?

For anyone who has not been following, SQL Server is adding In-Memory OLTP, aka Hekaton, to the SQL Server arsenal. At a high level, the idea behind Hekaton is to deliver databases and data structures optimized to operate completely in memory, as opposed to all previous versions of SQL Server, which were designed from a disk-storage perspective. A much better description and in-depth view of the workings of Hekaton is available in Kalen Delaney's paper on the subject.

Installation uses the pretty standard SQL Server GUI installer, but with one caveat: you cannot install CTP1 on a server with any previous version of SQL Server. In my haste to get it installed I hit this issue. A quick search turned up Aaron Bertrand's blog explaining that this is by design; the recommendation is to install on a new VM.

Creating In Memory Databases & Tables
The syntax for creating an in memory capable database is as follows:

CREATE DATABASE MemOptDB
ON PRIMARY
( NAME = N'MemOptDB', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB.mdf' , SIZE = 1048576KB ),
FILEGROUP MemOptDB_FG CONTAINS MEMORY_OPTIMIZED_DATA
( NAME = N'MemOptDB_File', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB_File.ndf' , MAXSIZE = 1048576KB )
LOG ON
( NAME = N'MemOptDB_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB_log.ldf' , SIZE = 1341696KB )
GO

Pretty standard, except for the phrase CONTAINS MEMORY_OPTIMIZED_DATA, which marks the filegroup that will hold the memory optimized data.

Once a Memory optimized database is created, a memory optimized table can be added:

CREATE TABLE dbo.MemOptTable
(
Col1 bigint not null PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
Col2 char(4000) null ,
Col3 char(4000) null
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

A few things to note here:

  • The table must be flagged as Memory_Optimized.
  • The durability can be SCHEMA_ONLY, in which case any data is lost after a server restart, or SCHEMA_AND_DATA, persisting both schema and data.
  • In memory tables must have a primary key.
  • Clustered indexes are not supported, so the primary key needs to be defined as a nonclustered index.
  • The index is not a standard B-Tree, but a new memory optimized hash index.
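To illustrate the SCHEMA_ONLY durability option from the list above, here is a minimal sketch of a non-durable variant (the table name dbo.MemOptStaging is hypothetical):

CREATE TABLE dbo.MemOptStaging
(
Col1 bigint not null PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
Col2 char(4000) null
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO

After a server restart the table definition survives but it comes back empty, which makes this option a candidate for staging or session-style data.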

If you are trying to populate the table with some unimportant test data, you cannot use the standard trick of INSERT INTO ... DEFAULT VALUES, as in memory tables do not support DEFAULT constraints. It is worth noting that IDENTITY columns are also currently not supported.
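A simple loop works instead; a minimal sketch, assuming the memory optimized table above is named dbo.MemOptTable:

SET NOCOUNT ON;
DECLARE @i bigint = 1;
WHILE @i <= 50000
BEGIN
    -- No DEFAULT constraints or IDENTITY, so every column value is supplied explicitly
    INSERT INTO dbo.MemOptTable (Col1, Col2, Col3)
    VALUES (@i, REPLICATE('x', 4000), REPLICATE('y', 4000));
    SET @i += 1;
END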

To get a quick view of the potential performance increase we decided to try a simple table scan of a standard table and an in memory table.


50,000 rows were inserted into each table, and each table was queried twice: once with a cold buffer and once with a warm buffer.
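A cold-buffer run can be simulated by flushing the buffer pool before querying; a sketch of the test, assuming the disk-based and in memory tables are named dbo.StandardTable and dbo.MemOptTable (run the SELECTs a second time, without the DBCC, for the warm-buffer numbers):

CHECKPOINT;
DBCC DROPCLEANBUFFERS;  -- cold buffer: clean pages are dropped from the buffer pool
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.StandardTable;
SELECT COUNT(*) FROM dbo.MemOptTable;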



Note that SET STATISTICS IO ON returns nothing for the query on the in memory table, perhaps indicating the underlying structure is very different and cannot be read with SET STATISTICS IO.


With a cold buffer, the in memory table scan was 6x quicker than a scan of a standard table.
With a warm buffer, the improvement dropped to 3x.

I expect that when I start playing with more complex queries and heavy transaction loads, particularly when locking and blocking come into play, there will be a much wider gap in performance, but that's for another day.


That's a super quick skim over the new in memory tables in SQL Server 2014. We can see that there is a significant performance improvement to be had over querying standard disk-based tables.

We haven't even scratched the surface here, but hopefully there is enough information, and links to some great sources, to get you tinkering.

Thanks for reading!

