Erland Sommarskog adds another gem to his treasure chest

February 21st, 2011

There are probably no articles that I reference more often than Erland’s. There are questions on forums, discussion groups and Stack Overflow that come back over and over again. For some of these topics Erland has prepared articles that are, ultimately, the article to point anybody to for more details on that topic. His coverage is deep, the articles are well structured, and they pretty much exhaust the topic. If somebody still has questions about a topic after reading the article, then I say that person knows too much about SQL Server… or not enough. I’m writing this blog entry not so much to announce his latest addition, Slow in the Application, Fast in SSMS? Understanding Performance Mysteries (it is hard for me to believe anyone who follows my blog does not know about his articles…), but more to create for myself a memo pad from which to pick up the links to the articles whenever I need them:

The Curse and Blessings of Dynamic SQL
Dynamic SQL, the technique of generating and executing SQL code on the fly, is sometimes irreplaceable, sometimes abused, and almost always done in a SQL-injection-prone, dangerous way. This article covers the pros and cons of the technique, when to use it and when not to, how to use it and what pitfalls to avoid.
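To illustrate the contrast the article elaborates on, here is a minimal sketch (table and column names are hypothetical) of injection-prone string concatenation versus a parameterized call through sp_executesql:
    -- Hypothetical example: searching a dbo.Products table by name.
    DECLARE @name nvarchar(100), @sql nvarchar(max);
    SET @name = N'Widget''; DROP TABLE dbo.Products; --';
    -- Injection prone: the user input becomes part of the statement text.
    SET @sql = N'SELECT * FROM dbo.Products WHERE Name = ''' + @name + N'''';
    -- EXEC (@sql);  -- would execute whatever the caller managed to smuggle in
    -- Safer: the value is passed as a parameter and is never compiled as code.
    SET @sql = N'SELECT * FROM dbo.Products WHERE Name = @name';
    EXEC sp_executesql @sql, N'@name nvarchar(100)', @name = @name;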
Arrays and Lists in SQL Server
In the absence of a true array type in the Transact-SQL language, developers have resorted to all sorts of workarounds to pass sets of data as parameters to Transact-SQL procedures and batches: strings containing comma-delimited items, XML variables, temp tables, table-valued parameters. Erland has a series of articles covering this topic in SQL Server 2008 (where table-valued parameters are available), SQL Server 2005 (where the xml data type became available) and SQL Server 2000 (the Dark Age).
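As a quick taste of the SQL Server 2008 approach, here is a minimal table-valued parameter sketch (the dbo.Orders table and the procedure are hypothetical):
    -- A table type describes the shape of the 'array'.
    CREATE TYPE dbo.IntList AS TABLE (Value int NOT NULL PRIMARY KEY);
    GO
    CREATE PROCEDURE dbo.GetOrdersByIds
        @Ids dbo.IntList READONLY
    AS
        SELECT o.OrderId, o.OrderDate
        FROM dbo.Orders AS o
        JOIN @Ids AS i ON i.Value = o.OrderId;
    GO
    -- The caller fills a variable of the table type and passes it in.
    DECLARE @ids dbo.IntList;
    INSERT INTO @ids (Value) VALUES (1), (2), (3);
    EXEC dbo.GetOrdersByIds @Ids = @ids;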
How to Share Data Between Stored Procedures
All you need to know about how to share data between procedures and how to choose among the various solutions: table-valued parameters, table-valued functions, temp tables, the xml data type, INSERT-EXEC, OPENQUERY, cursor variables, CLR.
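For a flavor of two of the techniques the article compares, here is a hedged sketch (procedure names and columns are hypothetical):
    -- INSERT ... EXEC captures a procedure's result set into a table.
    CREATE TABLE #result (OrderId int, Total money);
    INSERT INTO #result (OrderId, Total)
    EXEC dbo.GetOrderTotals;     -- hypothetical procedure returning two columns
    -- A temp table created by the caller is visible to nested procedures,
    -- so the callee can populate it directly.
    CREATE TABLE #work (OrderId int, Total money);
    EXEC dbo.FillOrderTotals;    -- hypothetical procedure that INSERTs into #work
    SELECT OrderId, Total FROM #work;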
Error Handling
Another series, this one covering error handling in Transact-SQL: in SQL Server 2005 and later using TRY-CATCH blocks, as well as in SQL Server 2000, before TRY-CATCH blocks were available.
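The canonical TRY-CATCH shape looks roughly like this (a sketch only, with a hypothetical dbo.Accounts table; the articles cover the many subtleties):
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- A doomed transaction (XACT_STATE() = -1) can only be rolled back.
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        -- Re-raise, preserving at least the original message text.
        DECLARE @msg nvarchar(2048), @severity int, @state int;
        SELECT @msg = ERROR_MESSAGE(), @severity = ERROR_SEVERITY(), @state = ERROR_STATE();
        RAISERROR (@msg, @severity, @state);
    END CATCH;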
Giving Permissions through Stored Procedures
Code signing is, by far, the best way to effectively grant granular permissions to non-privileged users for specific operations in SQL Server 2005 and later. This powerful mechanism is seldom used, though, because of the arcane ‘black magic’ that goes into getting the sequence of operations right in order to sign a procedure, and because of the complications that arise from the EXECUTE AS impersonation context required for signed procedures. This article goes to great lengths in shedding some light on this esoteric topic.
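The basic sequence of operations looks something like the sketch below (certificate, user and procedure names are hypothetical; the article covers the real-world complications):
    -- Create a certificate and a database user mapped to it; the user carries
    -- the extra permission that the procedure needs.
    CREATE CERTIFICATE SigningCert
        ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
        WITH SUBJECT = 'Procedure signing certificate';
    CREATE USER SigningCertUser FROM CERTIFICATE SigningCert;
    GRANT SELECT ON dbo.Payroll TO SigningCertUser;   -- hypothetical table
    GO
    -- Signing the procedure adds the certificate user's permissions to the
    -- execution context whenever the procedure runs.
    ADD SIGNATURE TO dbo.GetPayrollSummary            -- hypothetical procedure
        BY CERTIFICATE SigningCert
        WITH PASSWORD = 'Str0ngP@ssw0rd!';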
Slow in the Application, Fast in SSMS?
This article describes parameter sniffing, the process of ‘tuning’ a query plan to a specific value of the input parameters, and how this process can sometimes ‘pollute’ the query plan cache with a plan that is optimized for a specific set of parameters but performs poorly for other parameter values. The article explains what parameter sniffing is, how to identify when it happens, and how to fix it and prevent it.
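Two of the common mitigations look roughly like this (a sketch with a hypothetical dbo.Orders table; the article discusses when each one is appropriate):
    CREATE PROCEDURE dbo.GetCustomerOrders
        @CustomerId int
    AS
        -- OPTION (RECOMPILE) builds a fresh plan on every execution, so the plan
        -- always matches the current parameter value (at a compilation cost).
        SELECT OrderId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
        OPTION (RECOMPILE);
    GO
    -- Alternatively, compile for the average case instead of the sniffed value:
    --    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));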

This server supports version 662 and earlier…

November 23rd, 2010

A new error started showing up in SQL Server 2008 SP2 installations:

The database cannot be opened because it is version 661. This server supports version 662 and earlier. A downgrade path is not supported.

661 sure is earlier than 662, so what seems to be the problem? This error message is a bit misleading. SQL Server 2008 supports database version 655 and earlier. But with support for 15000 partitions in SQL Server 2008 SP2, databases enabled for 15000 partitions are upgraded to version 662. This upgrade is necessary to prevent an SQL Server 2008 R2 instance from attaching a database that has more than 1000 partitions in it, since the code in R2 RTM does not understand 15000 partitions and the effects would be unpredictable. So SQL Server 2008 SP2 does indeed support version 662, but it does not support version 661. This behavior is explained in the Support for 15000 Partitions.docx document, although the database versions involved are not explicitly called out.

So the error message above should really be read as:

The database cannot be opened because it is version 661. This server supports versions 662, 655 and earlier than 655. A downgrade path is not supported

With this information the correct resolution becomes clear: the user is trying to attach a SQL Server 2008 R2 database (version 661) to an SQL Server 2008 SP2 instance. This is not supported. The user has to either upgrade the SQL Server 2008 SP2 instance to SQL Server 2008 R2, or attach the database back to an R2 instance and copy the data out into a SQL Server 2008 database, e.g. using the Import and Export Wizard.
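
For databases that are already attached, the internal version can be checked directly; a quick sketch (the version-to-release mapping is my own summary, not taken from the document):
    SELECT name,
           DATABASEPROPERTYEX(name, 'Version') AS database_version
    FROM sys.databases;
    -- Rough mapping: 611/612 = SQL Server 2005, 655 = SQL Server 2008,
    -- 661 = SQL Server 2008 R2, 662 = SQL Server 2008 SP2 with 15000 partitions.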

TRY CATCH THROW: Error handling changes in T-SQL

November 22nd, 2010

When SQL Server 2005 introduced the BEGIN TRY and BEGIN CATCH syntax, it was a huge improvement over the previous error handling based on @@ERROR checks after each statement. Finally, T-SQL joined the ranks of programming languages, no longer just a data access language. Experience has shown that exception handling leads to better code compared to error checks. Yes, SEH is slower, but it is basically impossible to maintain the code discipline to check @@ERROR after every operation, so exception handling is just so much easier to get right. And besides, @@ERROR never had such a masterpiece article to guide you through it like A Crash Course on the Depths of Win32™ Structured Exception Handling.

But when trying to use the new TRY/CATCH exception handling in T-SQL code, one problem quickly became apparent: the CATCH block was masking the original error metadata: error number/severity/state, error text, origin line and so on. Within a CATCH block the code was only allowed to raise a *new* error. Sure, the original error information could be passed on in the raised error message, but only as a message. The all-important error code was changed. This may seem like a minor issue, but it turns out to have a quite serious cascading effect: the caller now has to understand the new error codes raised by your code, instead of the original system error codes. If the application code was prepared to handle deadlocks (error code 1205) in a certain way (e.g. retry the transaction), with a T-SQL TRY/CATCH block the deadlock error code would all of a sudden translate into something above 50000.
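
Here is a sketch of the kind of retry logic this masking breaks (table and values are hypothetical): the CATCH block itself can inspect ERROR_NUMBER(), but anything it re-raises to its own caller loses the original 1205 code:
    DECLARE @retry int, @msg nvarchar(2048);
    SET @retry = 3;
    WHILE @retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.Inventory SET Quantity = Quantity - 1 WHERE ItemId = 42;
            COMMIT TRANSACTION;
            SET @retry = 0;                   -- success, exit the loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() = 1205 AND @retry > 1
                SET @retry = @retry - 1;      -- deadlock victim: try again
            ELSE
            BEGIN
                -- Pre-THROW, only a *new* error can be raised here, so the
                -- caller sees a 50000-range error instead of the original code.
                SELECT @msg = ERROR_MESSAGE();
                RAISERROR (@msg, 16, 1);
                SET @retry = 0;
            END
        END CATCH;
    END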

Read the rest of this entry »

AlwaysOn: High-Availability and Read Scale-Out

November 11th, 2010

Along with the SQLPASS Summit 2010 announcements of SQL Server “Denali” features, the MSDN site has published preliminary content on some of these features. Going over the “HADR” Overview (SQL Server) content we can get an early idea about this feature. This post summarizes the AlwaysOn technology and compares it with its predecessor and close cousin, Database Mirroring. For brevity, I am intentionally omitting a lot of details.

The AlwaysOn technology in SQL Server “Denali”, also known by the project name “HADR” and often called Hadron, is a huge improvement over its predecessor, Database Mirroring. Like Mirroring, AlwaysOn is based on physical replication of the database by shipping the transaction log. In fact, it is not only similar to Database Mirroring but actually uses the DBM technology to replicate the database. The steps to set up AlwaysOn contain the steps to set up a Mirroring session, and the mirroring endpoints, catalog views and DMVs are still used to set up and monitor AlwaysOn. But AlwaysOn brings three more Aces to the table to make an unbeatable play:
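
As a small illustration of how much of the Database Mirroring plumbing is reused, the familiar mirroring endpoint catalog view is still the place to look at the endpoint an AlwaysOn setup communicates over (a hedged sketch, based on my reading of the preliminary documentation):
    -- The same endpoint catalog view used for Database Mirroring.
    SELECT name, state_desc, role_desc
    FROM sys.database_mirroring_endpoints;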

Read the rest of this entry »

SqlDependency based caching of LINQ Queries

August 4th, 2010

Query Notifications is the SQL Server feature that allows a client to subscribe to notifications that are sent when data in the database changes, regardless of how that change occurs. I have talked before about how Query Notifications works in the article The Mysterious Notification. This feature was designed specifically for client-side cache invalidation: the application runs a query, gets back the result and stores it in the cache. Whenever the result changes because data was updated, the application is notified and it can invalidate the cached result.

Leveraging Query Notifications from managed clients is very easy thanks to the dedicated SqlDependency class, which takes care of many of the details that need to be in place in order to receive these notifications. But the MSDN examples and the general community know-how with SqlDependency are geared toward straightforward usage, attaching it to a SqlCommand object.
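
The SqlDependency plumbing itself lives on the client, but there are a couple of server-side prerequisites worth remembering; a hedged sketch (database and user names are hypothetical):
    -- Query Notifications are delivered over Service Broker, so the broker
    -- must be enabled in the database...
    ALTER DATABASE MyAppDb SET ENABLE_BROKER;
    -- ...and the subscribing account needs permission to create subscriptions.
    GRANT SUBSCRIBE QUERY NOTIFICATIONS TO AppUser;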

Read the rest of this entry »

Remote Desktop Manager now available

June 29th, 2010

Microsoft internal folks have been using for years a little tool called the Remote Desktop Manager. This tool allows you to save connection settings for frequently used machines you remote into. You can group the servers and save the credentials used for each server or for each group. It allows you to tile and monitor multiple remote desktops at once, and overall it is a wonderful tool for anyone using remote desktops frequently.

Fortunately now the tool is available publicly for download from the Microsoft Download site.

High Volume Contiguous Real Time Audit and ETL

June 11th, 2010

Tomorrow at SQL Saturday #43 in Redmond I’ll be presenting a session documenting the real-time audit and ETL system Microsoft uses to manage the access policies on its network. This presentation goes over some of the challenges that a contiguous, non-stop, high-throughput stream of audit records poses when loading the audit into a data warehouse. I’m posting here the slides I’m going to present; if you plan to attend you can preview them right now and come to the session with an educated set of questions for your specific case.

Effective Speakers at Portland #devsat and #sqlsat27

May 11th, 2010

Which is faster: ++i or i++?

In 1998 I was looking to change jobs and I interviewed with a company for a C++ developer position. The interview went well and, as we were approaching the end, one of the interviewers asked me this question: which is faster, ++i or i++? I pondered the question a second, then the other interviewer said that it is probably implementation specific. The first one corrected him: i++ must return the value before the increment, therefore it must make a copy of itself, while ++i returns the value after the increment and therefore does not need to make a copy, it can return itself. With this my chance to actually answer the question, and their chance to see how I approach the problem, were gone, but the interview was finished anyway as we were out of time. I got the offer from them, yet I ended up with a different company. But that question lingered in my mind; I thought what a clever little thing to know. A few months later I got my hands on the Effective C++ book by Scott Meyers, and it whetted my appetite for the follow-up book More Effective C++. And there it was, item 6 in More Effective C++: Distinguish between prefix and postfix forms of increment and decrement operators.

These two books were tremendously important in forming me as a professional C++ developer. They got me started studying C++ more deeply, beyond what I had to use in my day-to-day job. I ended up taking a Brainbench C++ test and I scored in the top 10 worldwide, which pretty soon landed me an email from Microsoft recruiting. The rest, as they say, is history.

SQL Saturday #27 is going to be held on May 22 in Portland and will share the venue with Portland CodeCamp. The list of speakers is really impressive, and amongst them, you guessed it, is Scott Meyers presenting CPU Caches and Why You Care. There are many more fine speakers and interesting topics for every taste, and the event is free. It is worth your time if you’re in the area, and well worth a trip to the City of Roses if you’re not.

I myself will be presenting a session on High Volume Real Time Contiguous ETL and Audit.

To register with the CodeCamp and SQL Saturday events go to http://devsat.eventbrite.com

What is Remus up to?

April 1st, 2010

Some of you already know this: I am again an FTE with Microsoft. I was in a contract for a very interesting project with Microsoft IT over the past months, in a vendor position, and getting back in touch with the cool stuff that goes on inside Microsoft kindled back the passion for bold projects with big impact. Since March 29 I’m working again as a developer with what is officially known as the SQL RDBMS Core Team. I’m no longer involved with the Service Broker though; I now work on the Access Methods: Indexes, BTrees, Heaps and, let’s not forget, DBCC. Fun stuff.

bugcollect.com: better customer support

October 28th, 2009

I am a developer, I write applications for fun and profit, and I’ve been doing this basically my whole professional life. Over the years I’ve learned that it is important to understand the problems my users face. What are the most common issues, how often do they happen, who is most affected. I have tried the approach of logging into a text file and then asking my users to send me the log file. I’ve tried sending mail automatically from my application. It was useful, but my inbox just doesn’t scale to hundreds of messages that may happen after a … stormy release.

This is why I have created for myself an online service for application crash reporting. Applications can submit incident reports online and the service will collect them, aggregate them and do some initial analysis. I have been using this service in my applications over the past year, and I think that if I find it so useful, perhaps you will too. So I’ve invested more resources into this, made it into a commercial product and put it out for everyone: http://bugcollect.com.

After an application is ready and published, bugcollect.com offers a private channel for collecting logging and crash reporting information. bugcollect.com analyzes crash reports, aggregates similar problems into buckets and groups incidents reported by the same source, helping the development team focus on the most frequent crashes and problems. Developers get immediate feedback if a new release has a problem, and they don’t have to ask for more information. Developers can also set up a response for an incident bucket; this response will be sent by bugcollect.com in reply to any new incident report that falls into the same bucket. The application can then interpret this response and display feedback to the user, e.g. it can inform the user about a new download available that fixes the problem.

bugcollect.com reporting differs from system crash reporting like iPhone crash reports, Mac ‘send to Apple’ or Windows Dr. Watson because it is application-initiated. An application can decide to submit a report anytime it wishes, typically in an exception catch block. All reports submitted to bugcollect.com are private and can be viewed only by the account owner, the application development team.

bugcollect.com features a public RESTful XML-based API for submitting reports. Client libraries for .Net and Java are already available, as well as appender components for log4net and log4j. More client libraries are under development and an iPhone library will be made available soon.