SQL Centre of Excellence

An issue was recently brought to our attention whereby a package was, for all intents and purposes, in full working order: it could be executed by the developer from Visual Studio or the command line, yet it could not be executed by the SQL Agent.

Unfortunately, due to 3rd party components, the error received was a little cryptic, but after a bit of troubleshooting we found the issue was down to the Package Protection Level. As we experienced this issue in a real-world production environment, it strikes me as a topic that may benefit from a good blogging!

 

What are Package Protection Levels?

Simply put, package protection levels control who can open and execute SSIS packages. For example, you may want to password protect a package so it cannot be executed without providing a password, or you might want to ensure only certain users can execute the package.

 

The individual protection levels are:

DoNotSaveSensitive – Does what it says on the tin: sensitive data, such as passwords in connection strings, is not saved with the package.
EncryptAllWithPassword – The whole package is encrypted. To open or run the package the user must supply a password.
EncryptAllWithUserKey – The whole package is encrypted using a key based on the current user profile. Only the user who created or exported the package can open or run it.
EncryptSensitiveWithPassword – Only sensitive parts of the package are encrypted. Anyone can open the package, but without the password all sensitive data is blank. The password is needed to run the package.
EncryptSensitiveWithUserKey – Only sensitive parts of the package are encrypted, with a key based on the current user profile. Any user can open the package, but if they are not the user who encrypted it, all sensitive data is blanked and the package will not run.
ServerStorage – Security relies on database roles. Only valid if the package is deployed to MSDB.

 

Where is Package Protection Set?

Package protection is set in one of 3 places:

1. At the package level in Package Properties

In the Control Flow screen, right-click on the control flow background and click Properties. The package properties appear in the bottom right of Visual Studio.


2. At the project level (2012)

Right-click on the project in Solution Explorer and click Properties.
Note that if a project contains multiple packages, they must all use the same protection level as the project, otherwise the project cannot be deployed. This option is not available pre-2012.


 

3. At the Command Line
dtutil.exe /file "C:\testPackage.dtsx" /encrypt file;"C:\testPackage.dtsx";2;YourPassword

 

Using the Various Protection Levels

So, how can a package be executed under each of the Protection Levels?

DoNotSaveSensitive

As sensitive data is not saved with the package, a user has to provide it. This is easily done using either parameters or a configuration file.

EncryptAllWithPassword / EncryptSensitiveWithPassword

A user must provide a password along with the command to execute the package:

DTEXEC.exe /f "C:\testPackage.dtsx" /De YourPassword

EncryptAllWithUserKey / EncryptSensitiveWithUserKey

This mode is quite restrictive: unless you are the person who created/deployed the package, the only way to run it is by creating a proxy account within SQL Server that is linked to the credentials of the developer who created/deployed the package. An Agent job step can then be set to run as that proxy, and users can then run the package via the Agent, under the credentials of the original developer.
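For reference, the setup looks roughly like this; the credential, proxy and account names below are made up for illustration, so swap in your own:

-- Create a credential that holds the developer's Windows login (illustrative names)
USE master;
CREATE CREDENTIAL SSISDeveloperCred
WITH IDENTITY = N'DOMAIN\PackageDeveloper',  -- the account that created/deployed the package
     SECRET = N'ThatAccountsPassword';
GO

-- Create an Agent proxy bound to that credential and allow it to run SSIS job steps
USE msdb;
EXEC dbo.sp_add_proxy
    @proxy_name = N'SSISDeveloperProxy',
    @credential_name = N'SSISDeveloperCred',
    @enabled = 1;

EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISDeveloperProxy',
    @subsystem_id = 11;  -- 11 is the SSIS package execution subsystem on my instances; check msdb.dbo.syssubsystems if in doubt
GO

The proxy then shows up in the "Run as" dropdown for SSIS package job steps.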

Unfortunately there is no way to set a specific account to encrypt the package with, other than logging on as the account in question and deploying the package. If anyone wants to correct me on this I'd be delighted to admit error, as this particular lack of functionality really grinds my gears!

 

Back to the Original Problem!

I started this blog detailing an issue we had where a package ran fine via the command line or Visual Studio but wouldn't run under the Agent. Seems pretty clear what the issue is now, right?
The protection level was set to EncryptSensitiveWithUserKey.
To quickly get around the issue we re-encrypted the package with a password, supplied the password in the execute command, and the package was executed by the Agent. Longer term, using a configuration file or some other method may be more beneficial.

 

 

As always, any questions on the content above, observations or corrections just give me a shout in the comments!


Thanks for reading!

On the back of an awesome SQL Saturday here in Dublin I have been inspired/motivated to set myself a goal of blogging much more than I have been doing lately (so just one blog post hits that goal, as I haven't been blogging!).

What better topic to discuss than the recent release of SQL Server 2014 CTP1?

For anyone who has not been following, SQL Server is adding In-Memory OLTP, aka Hekaton, to the SQL Server arsenal. At a high level, the idea behind Hekaton is to deliver databases and data structures optimized to operate completely in memory, as opposed to all previous versions of SQL Server, which were designed from a disk storage perspective. A much better description and in-depth view of the workings of Hekaton are available in Kalen Delaney's paper on the subject.

Installation
Installation was the pretty standard SQL Server GUI installer, but with one caveat: you cannot install CTP1 on a server with any previous version of SQL Server. In my haste to get it installed I hit this issue. A quick search turned up Aaron Bertrand's blog explaining that this is by design; the recommendation is to install on a new VM.

Creating In Memory Databases & Tables
The syntax for creating an in memory capable database is as follows:

CREATE DATABASE [MemOptDB]
ON  PRIMARY
( NAME = N'MemOptDB', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB.mdf' , SIZE = 1048576KB),
FILEGROUP [MemOptFG] CONTAINS MEMORY_OPTIMIZED_DATA  DEFAULT
( NAME = N'MemOptDB_File', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB_File.ndf' , MAXSIZE = 1048576KB)
LOG ON
( NAME = N'MemOptDB_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MemOptDB_log.ldf' , SIZE = 1341696KB )
GO

Pretty standard, except for the phrase CONTAINS MEMORY_OPTIMIZED_DATA.

Once a memory-optimized database is created, a memory-optimized table can be added:

CREATE TABLE InMemoryTable
(
Col1 bigint not null,
Col2 char(4000) null,
Col3 char(4000) null,
PRIMARY KEY NONCLUSTERED HASH (Col1) with (BUCKET_COUNT = 200000)
) with (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)

A few things to note here:

  • The table must be flagged as Memory_Optimized.
  • The durability can be SCHEMA_ONLY, in which case any data is lost after a server restart, or SCHEMA_AND_DATA, persisting both schema and data (see the sketch after this list).
  • In memory tables must have a primary key.
  • Clustered indexes are not supported, so the primary key needs to be defined as a nonclustered index.
  • The index is not a standard B-Tree, but a new memory optimized hash index.
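
For completeness, here is roughly what a SCHEMA_ONLY variant might look like (the table name is just illustrative); data in such a table disappears on restart, which can be handy for staging-type work:

CREATE TABLE InMemoryStaging
(
Col1 bigint not null PRIMARY KEY NONCLUSTERED HASH with (BUCKET_COUNT = 200000),
Col2 char(4000) null
) with (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)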


If you are trying to populate the table with some unimportant test data, you cannot use the standard trick of INSERT INTO <table> DEFAULT VALUES, as in-memory tables do not support default constraints. It is worth noting that IDENTITY columns are also currently not supported.
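
If you just need filler rows, a plain WHILE loop does the job; the row count and values here are arbitrary:

DECLARE @i bigint = 1;
WHILE @i <= 50000
BEGIN
    INSERT INTO InMemoryTable (Col1, Col2, Col3)
    VALUES (@i, 'aaaa', 'bbbb');
    SET @i += 1;
END;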

To get a quick view of the potential performance increase we decided to try a simple table scan of a standard table and an in memory table.

 

50,000 rows were inserted into each table and each table was queried twice, once with a cold buffer and once with a warm buffer.
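
Roughly, the comparison looked like the sketch below; DiskTable stands in for the standard table and COUNT(*) for the scan, so treat it as an outline rather than the exact script we ran:

-- Cold buffer: clear the buffer pool first (test environments only!)
DBCC DROPCLEANBUFFERS;
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*) FROM DiskTable;      -- standard disk-based table
SELECT COUNT(*) FROM InMemoryTable;  -- memory-optimized table

-- Warm buffer: run the same two queries again without clearing the buffer pool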

 

Results

Note that SET STATISTICS IO ON returns nothing for the query on the in memory table, perhaps indicating the underlying structure is very different and cannot be read with SET STATISTICS IO.

 

With a cold buffer the in-memory table scan was 6x quicker than a scan of a standard table.
With a warm buffer the improvement dropped to 3x.

I expect that when I start playing with more complex queries and heavy transaction loads, particularly when locking and blocking come into play, there will be a much wider gap in performance, but that's for another day.

 


Conclusion
That's a super quick skim over the new in-memory tables in SQL 2014. We can see that there is significant performance improvement to be had over querying standard disk-based tables.

We haven't even scratched the surface here, but hopefully there is enough information, and links to some great sources, to get you tinkering.


Thanks for reading!

Firstly, Hello all!

I’ve decided that after years of learning and “borrowing” ideas, info and techniques from other people’s blogs, it might be about time I put one up and share some of the interesting, useful or just downright odd things I have come across in my years working in IT, and particularly with SQL Server.

Hopefully you will find some of these blogs useful, interesting, or at the very least give you something to argue with me over!

Back Story – I work as a SQL Consultant for Prodata, who are the top SQL Server & BI consultants in Ireland (shameless plug!).

My main area of expertise is as a production DBA, but I also dip into BI, SSIS, Office (well, Excel), a little virtualization and general IT work.

 

Today’s blog is concerned with SQL Server views, or rather the issues that are encountered when using views, and how to sidestep them.

 

For this example I am using a subset of the DimCustomer table in AdventureWorksDW2012, available on CodePlex. It’s a contrived example but gets the message across.

On running a select * from our Customers table, the results look like this:

Imagine we want to expose this table to certain users/groups via a view. This is something we do pretty regularly when exposing Data Warehouse tables via views to help minimize effect of base table changes on reports:

Create View CustomerInfo

AS

Select * from Customers

GO

It’s probably better to try not to use select * in views in general, but during development it can save a lot of time when schemas are changing regularly.

Running a select * from the view returns the results as shown previously, the view is after all just a select * from the underlying table.

 

Now, we have a request to alter the Customers Table, so that TotalChildren will be before the BirthDate column, Phone will be before AddressLine1 and LastName will be changed to Surname:
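
The rename part of that change can be scripted with sp_rename; reordering columns, on the other hand, means rebuilding the table, which is what the SSMS table designer does behind the scenes:

EXEC sp_rename 'dbo.Customers.LastName', 'Surname', 'COLUMN';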

 

Pretty easy change, right? But then look what happens if you execute the view:

 

The column data for TotalChildren & Phone is in the correct position; however, the column headers are completely incorrect! The BirthDate column has the data for TotalChildren underneath it, AddressLine1 has the Phone number data underneath it, and LastName has not been updated to Surname!

 

So what has happened here?

A schema change to the base table does not force a metadata update on the view. If we query the view metadata we see the column order and names reflect those of the table before the table schema change:
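
If you want to see this for yourself, the view's recorded column metadata can be pulled from sys.columns (using the view name from this example):

SELECT c.column_id, c.name
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.CustomerInfo')
ORDER BY c.column_id;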

The issue is easily resolved in one of two ways:

  • Drop and recreate the view
  • Execute sp_refreshview

sp_refreshview is usually the preferred solution, and can be run against a specific view or, with a little scripting, against all views.
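
For the example view above that is simply the first statement below; the cursor that follows is just a rough sketch of how you might refresh every non-schemabound view in one go:

EXEC sys.sp_refreshview N'dbo.CustomerInfo';

-- Rough sketch: refresh all non-schemabound views in the current database
DECLARE @v nvarchar(517);
DECLARE viewCur CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.views
    WHERE OBJECTPROPERTY(object_id, 'IsSchemaBound') = 0;
OPEN viewCur;
FETCH NEXT FROM viewCur INTO @v;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_refreshview @v;
    FETCH NEXT FROM viewCur INTO @v;
END;
CLOSE viewCur;
DEALLOCATE viewCur;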

This issue can be completely prevented by creating the view with schemabinding. However, it can become quite annoying during development to have to remove schemabinding each time a schema change is required, and a refresh of the view metadata is still needed, though this will happen when you alter the view to re-add schemabinding.
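
If you do go the schemabinding route, the view must use two-part names and an explicit column list (no select *); something along these lines, with the column names purely illustrative:

CREATE VIEW dbo.CustomerInfo
WITH SCHEMABINDING
AS
SELECT FirstName, Surname, BirthDate, TotalChildren, Phone, AddressLine1
FROM dbo.Customers;
GO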

 

Attached is a short script to demo the items discussed.

 

That’s the first blog out of the way! Phew! Any questions, please just give me a shout.

Fintan.

view_metadata_issues_examples.sql (1.71 kb)

 
