Friday, January 28, 2011

A pet peeve

SQL Server exposes over 400 wait types via the sys.dm_os_wait_stats DMV. The most common ones are fairly well documented. However, try to find some help if you see high counts of PREEMPTIVE_OS_ENCRYPTMESSAGE. You'll find PLENTY of "help" sites that will tell you "SQL Server has all these new cool waits," then list each and every one of them with no explanation.

In short, if you don't know what something is, don't post it on your blog. So, to avoid being my OWN pet peeve, here's at least what I discovered about the wait type mentioned above: the client uses SSL connections with their JDBC driver but NOT with SQL Server. Therefore, when SQL Server returns a result to the application, it makes an OS call to encrypt the data using the server's certificate rather than SQL Server's certificate.
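For reference, here's a quick way to see which wait types dominate on an instance (a minimal sketch using the documented columns of sys.dm_os_wait_stats):

```sql
-- Top 10 wait types by total wait time.
-- Note: sys.dm_os_wait_stats is cumulative since the last service restart
-- (or a manual clear via DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)).
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```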

Thursday, January 13, 2011

SQL Server Denali: SEQUENCES

I'm primarily a SQL Server DBA, although I've worked with Sybase, MySQL and Oracle throughout my career. As such, in my current consulting role, I'm often called upon to work with companies migrating from one platform to SQL Server. When migrating Oracle to SQL Server, some functionality does not map. For instance, Oracle's SEQUENCE objects. While SQL Server does have IDENTITY columns, the functionality is not the same. Therefore, in order to do a full migration, you must implement your own versions of Oracle's NEXTVAL and CURRVAL functions.
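A common home-grown NEXTVAL replacement (a sketch; the table and procedure names here are illustrative, not from any particular migration) uses a one-row-per-sequence counter table and a single atomic UPDATE:

```sql
-- One row per emulated sequence.
CREATE TABLE dbo.Sequences
    (SeqName varchar(50) NOT NULL PRIMARY KEY,
     CurrentValue bigint NOT NULL);
GO

-- NEXTVAL equivalent: atomically increment and return the new value.
-- The SET @var = col = expression form assigns both in one statement,
-- so no explicit transaction or extra locking hint is needed here.
CREATE PROCEDURE dbo.GetNextVal
    @SeqName varchar(50),
    @NextVal bigint OUTPUT
AS
    UPDATE dbo.Sequences
    SET @NextVal = CurrentValue = CurrentValue + 1
    WHERE SeqName = @SeqName;
GO
```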

One of the new features in SQL Server Denali is the SEQUENCE object.
The functionality is very similar to Oracle's implementation. It has min and max values, a START WITH value, an INCREMENT and, of course, NEXT VALUE FOR. The SEQUENCE can also be cached: it reserves the next N values for the SEQUENCE in memory, and when the cache is exhausted it generates the next N values.
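As a quick taste, here's what the syntax looks like (a sketch based on the Denali CTP; the sequence and table names are made up):

```sql
-- Create a sequence starting at 1, handing out values in cached blocks of 50.
CREATE SEQUENCE dbo.OrderNumbers AS int
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 1000000
    CACHE 50;

-- Pull the next value, much like Oracle's NEXTVAL.
SELECT NEXT VALUE FOR dbo.OrderNumbers AS NextOrderNumber;

-- It can also be used directly in an INSERT.
INSERT INTO dbo.Orders (OrderId, CustomerName)
VALUES (NEXT VALUE FOR dbo.OrderNumbers, 'Acme');
```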

I'll provide some sample code and use cases for this construct in my next post.

Wednesday, January 5, 2011

SQL Denali: New Feature

I was going through the CTP of Denali today. That's the upcoming release of SQL Server. One of the new T-SQL enhancements in the engine is PAGINATION. That is, scrolling or limiting results returned to the application. I first encountered issues with this in 2006 when migrating a PHP web app with a MySQL backend to SQL Server 2000. The developers had used the MySQL construct LIMIT to restrict the rows returned to the application.

SELECT Customer_LastName, Customer_FirstName, City, State
FROM Customers
WHERE Customer_LastName LIKE 'bren%'
LIMIT 5

This would return only the first 5 rows. The developer asked how to do this with SQL Server 2000. It wasn't easy, but using a combination of TOP and ORDER BY along with subqueries, we got what we wanted. A year later, we upgraded to SQL Server 2005 and were able to take advantage of the ROW_NUMBER() OVER() function.
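For pages beyond the first, the SQL Server 2000-era workaround looked something like this (a sketch using the table and columns from the example above; it fetches rows 6-10 by reversing a nested TOP twice):

```sql
-- Rows 6-10: take the TOP 10 ascending, reverse it and take TOP 5,
-- then reverse again to restore the original sort order.
SELECT t2.Customer_LastName, t2.Customer_FirstName
FROM (
    SELECT TOP 5 t1.Customer_LastName, t1.Customer_FirstName
    FROM (
        SELECT TOP 10 Customer_LastName, Customer_FirstName
        FROM Customers
        WHERE Customer_LastName LIKE 'bren%'
        ORDER BY Customer_LastName ASC
    ) AS t1
    ORDER BY t1.Customer_LastName DESC
) AS t2
ORDER BY t2.Customer_LastName ASC;
```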

SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY Customer_LastName) AS ROWID,
           Customer_LastName, Customer_FirstName
    FROM Customers
    WHERE Customer_LastName LIKE 'bren%'
) AS numbered
WHERE ROWID <= 5

Now comes the OFFSET CLAUSE in ORDER BY.
This first example skips the first 5 rows and returns everything AFTER them.

SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID
OFFSET 5 ROWS

This example returns rows 1 through 5

SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID
OFFSET 0 ROWS FETCH NEXT 5 ROWS ONLY
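The same clause makes generic page-at-a-time retrieval trivial (a sketch; @PageNumber and @PageSize are variables I'm introducing for illustration):

```sql
-- Generic paging with OFFSET/FETCH: return page @PageNumber of @PageSize rows.
DECLARE @PageNumber int = 3;
DECLARE @PageSize   int = 5;

SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;
```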

Much simpler! Looking forward to exploring the other new Denali features: HADRON with readable mirrors/replicas, and SEQUENCES (sound familiar, you Oracle peeps?).

Tuesday, January 4, 2011

Tuning and Algorithm Analysis Part II

In part one of this series I gave a quick, high level explanation of Algorithm Analysis and "Big O", O(n), notation. Today I will continue and show the impact iterative logic can have on a database, particularly within a lengthy, complex stored procedure.

I'll use a design pattern I see frequently when working with clients on slow queries: the WHILE loop. I've come across many people who say "We know they're bad, so we don't use CURSORS." Well, a WHILE loop is just as bad.

So let me set up the first example.
The task was to update the location for "units" under an organization. It was performed in 2 separate WHILE loops.

CREATE TABLE #hierarchy
    (rowid int IDENTITY(1,1) NOT NULL, UnitId int /* , etc. */)

-- Loop 1: walk the hierarchy one level at a time
WHILE @hasdependent = 'TRUE'
BEGIN
    INSERT INTO #hierarchy (UnitId /* , etc. */)
    SELECT Col1 /* , Col2, etc. */
    FROM UnitTbl
    WHERE ParentID = @ChildId
    SELECT @NumRows = @NumRows + 1
END

-- Loop 2: update each unit one row at a time
WHILE @LoopCnt <= @NumRows
BEGIN
    SELECT @UNIT_ID = UnitId FROM #hierarchy WHERE rowid = @LoopCnt
    UPDATE UnitTbl SET Location = @LOC WHERE UnitId = @UNIT_ID
    SELECT @LoopCnt = @LoopCnt + 1
END

A quick look at this algorithm shows that the first loop executes N times for the number of units under a given Org. The second loop then executes N times as well, resulting in O(N+N), or O(2N). Unfortunately, it did not stop there. The #hierarchy temp table was used in several other loops, making this more like O(M*N). So even if the SELECT that populates the temp table and the UPDATE both use an index seek, the nature of the algorithm itself is inefficient and leads to excessive reads and writes. To make matters worse, #hierarchy would typically have several thousand rows and it's not indexed, so table scans degrade this further. Running their query with client statistics and SET STATISTICS IO ON revealed the following:
Table '#hierarchy________________________________'.
Scan count 10, logical reads 10, physical reads 0.
Table 'b_unit_equipment_item'.
Scan count 15, logical reads 23, physical reads 6.
Table '#hierarchy________________________________'.
Scan count 10, logical reads 2, physical reads 0.
Table '#hierarchy________________________________'.
Scan count 10, logical reads 1, physical reads 0.
Table '#hierarchy________________________________'.
Scan count 10, logical reads 2, physical reads 0.
Table '#hierarchy________________________________'.
Scan count 10, logical reads 1, physical reads 0.
Table '#hierarchy________________________________'.
Scan count 1, logical reads 3, physical reads 0.

The end result is an inefficient algorithm combined with poor query design: on each iteration, the procedure incurs scans and, ultimately, physical I/O. By replacing the first loop with a Common Table Expression (CTE) to build the hierarchy, and the second loop with a single UPDATE, that portion of the procedure's plan and I/O was reduced to this:
Table 'b_unit_equipment_item'.
Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
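The set-based rewrite looked roughly like the following (a sketch reusing the same assumed table and column names as the loop version; @OrgId is a placeholder for however the real procedure identifies the organization):

```sql
-- Build the whole hierarchy in one pass with a recursive CTE,
-- then update every unit in a single set-based statement.
;WITH hierarchy AS
(
    -- Anchor: units directly under the organization
    SELECT UnitId
    FROM UnitTbl
    WHERE ParentID = @OrgId

    UNION ALL

    -- Recursive step: children of units already found
    SELECT u.UnitId
    FROM UnitTbl AS u
    INNER JOIN hierarchy AS h ON u.ParentID = h.UnitId
)
UPDATE UnitTbl
SET Location = @LOC
WHERE UnitId IN (SELECT UnitId FROM hierarchy);
```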

In conclusion, tuning a poorly performing query can involve more than just looking for bad indexes. Look for inefficient algorithms and excessive I/O as well; together, these can create a real performance bottleneck.

Monday, January 3, 2011

Tuning and Algorithm Analysis Part I

As database professionals, at one time or another we are all faced with tuning a poorly performing query or procedure. The Database Tuning Advisor (DTA) and Profiler are both useful tools for finding missing indexes, poor join choices and scans. However, many times it's the design, or algorithm, of a lengthy procedure that is affecting performance. The most common culprit is iterative logic vs. set-based logic. And that's the focus of this series of posts: algorithm analysis. A poor algorithm is just as bad for a query (or any program) as a table scan.

So part 1 of this series is a brief explanation of algorithm analysis.

Algorithm Notation
The most common representation of algorithm efficiency is called big O, or
O(N), notation. You might see an algorithm described as O(N^2) or O(N log N), etc. You may also see big Theta, Θ(N), and big Omega, Ω(N). Big O is an upper bound, big Omega is a lower bound, and big Theta is a tight bound.

Algorithm derivation
So what determines the efficiency of an algorithm? I'll walk through a brute-force analysis using the bubble sort algorithm. For simplicity, I'll use standard pseudo code.

    procedure bubbleSort( A : list of sortable items )
  1. n = length(A)

  2. for (i = 1; i <= n; i++)

  3.     for (j = 1; j <= n-1; j++)

  4.        if A[j] > A[j+1] then

  5.           swap(A[j], A[j+1])

  6.       end if

  7.     end for

  8. end for

  9. end procedure

We'll consider each line an atomic action and count the number of times it is executed. We'll also assume for this analysis that n = 10.
Line 1 is executed once.
Lines 2 and 8 are executed n, or 10, times.
For each iteration of the outer loop, lines 3 through 7 are executed n-1 times.

This gives a total execution count of n * (n-1). As N increases, this approaches an upper bound of N^2. For example: for N = 100, N * (N-1) = 100 * 99; for N = 1000, 1000 * 999.

That means as the input size doubles, the number of executions quadruples; the growth is quadratic.
N = 10, executions = 100
N = 20, executions = 400
N = 40, executions = 1,600

So regardless of whether this is written in C++, Java or T-SQL, this would be an inefficient algorithm.

Part II of this series will discuss these types of algorithms, commonly referred to as RBAR (Row By Agonizing Row), and their impact on the database.