Here are some common questions I am asked about TempDB.
Question 1) How big should my TempDB be?
On an existing system this is quite a simple question to answer. By default TempDB starts at 8MB and grows by 10%, so we can simply look at the size it has grown to and we know how big it needs to be.
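To see how big TempDB has actually grown, a quick sketch of a query against the file catalog view (the `size` column is in 8KB pages, hence the conversion to MB):

```sql
-- Current size of each TempDB file in MB
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```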
On a “suck it and see” approach the only consideration is to ensure that file growth is not left at the default values, so that we can limit “growth” events. Consider growing in 50MB or 100MB+ increments, depending on the size of the databases.
Once we know how big it can grow to, it is really important to go back to the database size settings and correctly size the TempDB database. Bear in mind that when the instance is restarted, TempDB is recreated at the original size specified. This can lead to “Monday blues” following a reboot on a Sunday; a warning sign for consultants is the customer statement “the system is slow every Monday morning”.
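Pre-sizing can be sketched along these lines; the logical file names `tempdev` and `templog` are the SQL Server defaults and the sizes are examples only, so check `tempdb.sys.database_files` on your own instance first:

```sql
-- Pre-size TempDB so it comes back at the right size after a restart,
-- and set a fixed growth increment rather than the default 10%
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 100MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 50MB);
```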
What if you don’t have an existing workload, e.g. you are doing capacity planning for a new solution that has not been tested yet? Well, a few underpinning questions to ask:
1) How big is the largest table, index, or partition that will be rebuilt? If it is sorted in TempDB, then this gives a minimum size, plus a bit more.
2) Do we make use of any new features that use a lot of TempDB, such as snapshot isolation or lots of large sorts? http://msdn.microsoft.com/en-us/library/ms345368.aspx
3) If we have the database with sample data, we can run DBCC CHECKDB WITH ESTIMATEONLY to estimate the amount of space needed for consistency checking.
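The estimate can be produced without running the full check; the database name below is a placeholder:

```sql
-- Report the TempDB space DBCC CHECKDB would need,
-- without actually performing the consistency checks
DBCC CHECKDB (N'YourDatabase') WITH ESTIMATEONLY;
```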
Question 2) How many files do I need for TempDB?
This is always a contentious question, even amongst the SQL community. Microsoft do recommend one file per core http://msdn.microsoft.com/en-us/library/ms175527.aspx
On systems with more than eight cores, I don’t usually go that far in my recommendations, as it can be harder to manage and to ensure that growth across all the files stays even. On most systems I see, 2-4 files is more than enough to avoid concurrency issues.
Paul Randal also recommends avoiding that 1-1 ratio for most systems: http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-%281230%29-tempdb-should-always-have-one-data-file-per-processor-core.aspx
However, there is the expression “No one ever got fired for having too many files for TempDB”. Too many files is likely to hurt less than not enough, but the point is it can hurt…
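Adding a data file can be sketched as below; the file path, name, and sizes are examples only. Keeping every file the same size matters, because SQL Server’s proportional fill algorithm only spreads allocations evenly across equally sized files:

```sql
-- Add a second TempDB data file, sized to match the first
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 2048MB,
          FILEGROWTH = 100MB);
```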
Question 3) Why is my TempDB so huge?
Well, there are three likely suspects which we need to eliminate from our enquiries:
a) User objects: developers specifically creating temporary objects
b) Internal objects: hashing and sorting for monster queries or queries with bad plans and lack of indexes.
c) Version store: snapshot isolation and triggers maintaining multiple versions of rows in TempDB
The DMV sys.dm_db_file_space_usage will show which one is the culprit. Here’s a sample query that shows the total reserved space in KB (reserved page counts multiplied by 8) and how it breaks down:
SELECT
    user_object_perc = CONVERT(DECIMAL(6,3), u*100.0/(u+i+v+f)),
    internal_object_perc = CONVERT(DECIMAL(6,3), i*100.0/(u+i+v+f)),
    version_store_perc = CONVERT(DECIMAL(6,3), v*100.0/(u+i+v+f)),
    free_space_perc = CONVERT(DECIMAL(6,3), f*100.0/(u+i+v+f)),
    [total] = (u+i+v+f)
FROM (
    SELECT
        u = SUM(user_object_reserved_page_count)*8,
        i = SUM(internal_object_reserved_page_count)*8,
        v = SUM(version_store_reserved_page_count)*8,
        f = SUM(unallocated_extent_page_count)*8
    FROM tempdb.sys.dm_db_file_space_usage
) AS x;
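Once you know which category is to blame, you can drill down to the sessions responsible. A sketch using the task-level DMV sys.dm_db_task_space_usage (again, page counts are multiplied by 8 to give KB):

```sql
-- Sessions currently allocating the most TempDB space
SELECT TOP (10)
    t.session_id,
    SUM(t.user_objects_alloc_page_count) * 8     AS user_objects_kb,
    SUM(t.internal_objects_alloc_page_count) * 8 AS internal_objects_kb
FROM sys.dm_db_task_space_usage AS t
GROUP BY t.session_id
ORDER BY SUM(t.internal_objects_alloc_page_count) DESC;
```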