
SQL71567: Filegroup specification issue with SQL projects in VS 2012

If you’re seeing the above error, chances are you’ve got a filegroup specification on both your table and a clustered or non-clustered inline index. This creates a compile-time error as of the September 2012 SSDT update, and it left me scratching my head until I saw this forum thread.

To fix the issue, I’ve created a PowerShell script that parses each of the .table.sql files in your database project directory (and subdirectories) looking for a filegroup specification. It then parses the related index files to check for a duplicate specification and, as per the recommendation in the forum thread above, removes the specification from the table script.

The script is available as a gist.

Feedback and forks welcome.


Implementing concurrency with timestamp (row versioning)

Having just written up some basic concurrency handling in the services I’m creating I thought I might post a few things I came across on the way.

1. timestamp != date or time

The timestamp data type in SQL Server is not a date or time. In fact, it’s not even related to either. It’s a binary field that increments each time a row is changed. This means it’s really just what its synonym – rowversion – suggests: a row version. If you want to use it, it maps to the .Net SqlBinary type. More on this later…

2. Error severity level

I found getting a description of the error severity levels notoriously difficult. Using the SQL RAISERROR function with severity level 1 causes the error to be swallowed as informational within the action, rather than being raised as an exception to the framework. This can be quite frustrating at first, but I found a good guide to error handling in SQL Server here. This guide gave me just about everything I needed to know and I highly recommend it.

3. timestamp == SqlBinary

As mentioned, the timestamp type maps to the SqlBinary type (see here for a complete type mapping list). If you don’t want to pass the Sql type around all over the place, use the .Value property to obtain the byte array that makes up the timestamp and keep that in your object instead. You could make this field read-only as well, though it hardly matters: unless the user somehow knows exactly how many times the row will change between now and their update reaching the database, any change they make to this field will just cause the update to be rejected anyway.

You’ll notice in the MSDN list that timestamp doesn’t map to a specific .Net framework type. Use the rowversion synonym instead, which maps to byte[]. I’ve read a few blogs and articles that recommend using rowversion, so that if the concepts diverge in future editions behaviour remains consistent. I think this is a good idea, and the mappings chart provides a solid reason to trust these recommendations.
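To illustrate that last point, here’s a minimal sketch of an optimistic-concurrency check over the version bytes. The Payment class and member names here are mine, not from any particular framework – the thing to notice is that byte[] is a reference type, so you need SequenceEqual (or similar) to compare the actual version values.

```csharp
using System;
using System.Linq;

// Hypothetical entity; RowVersion would map to a rowversion column in SQL Server.
public class Payment
{
    public int Id { get; set; }
    public byte[] RowVersion { get; set; }
}

public static class ConcurrencyCheck
{
    // True when the version read earlier no longer matches the version
    // currently in the database, i.e. someone else has updated the row.
    public static bool HasConflict(byte[] original, byte[] current)
    {
        // == on byte[] compares references, not contents;
        // SequenceEqual compares the actual version bytes.
        return !original.SequenceEqual(current);
    }
}
```

In an update you’d compare the version the client originally read against the row’s current version, and reject the update on a mismatch.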

So there they are, a few little hints that I picked up on my way to implementing some kind of concurrency control through row versioning.

SQL Server DateTime vs .Net DateTime – Battle of Accuracy!

This afternoon I was writing tests to validate a repository class. I wanted to make sure what went in, was the same as what came out. So I wrote a test to insert a record, then retrieve it and finally compare the two instances. The test itself is not my focus here, but what happened when I compared the returned instance against the original is the real highlight.

Part of the class that is inserted, and retrieved is a DateTime field. My original comparison looked like this…

Assert.AreEqual(payment.DateReceived, retrievedPayment.DateReceived);

The test failed. The first thing I thought was – the AreEqual is not comparing the dates, and therefore these will never be equal. So, I changed the test to use the DateTime.Compare() method…

Assert.AreEqual(0, DateTime.Compare(payment.DateReceived, retrievedPayment.DateReceived));

Ok, re-run the tests. Red light. What the? Time to debug and see what’s going on. Stepping through the code I noticed something… the times are different! Not by much – in fact, looking at the datetime field in the database, it looked the same. So I checked the ticks. Two milliseconds difference!
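For what it’s worth, Assert.AreEqual was doing its job all along: DateTime is a value type and its equality is exact to the tick (100 nanoseconds). A quick sketch of the situation, with a made-up 2 ms difference standing in for the database round-trip:

```csharp
using System;

class TickEqualityDemo
{
    static void Main()
    {
        var stored = new DateTime(2008, 5, 1, 10, 30, 0);
        // Pretend the database handed back a value 2 ms different.
        var returned = stored.AddTicks(2 * TimeSpan.TicksPerMillisecond);

        // DateTime equality compares the underlying tick count,
        // so even a 2 ms difference makes the values unequal.
        Console.WriteLine(stored == returned);                 // False
        Console.WriteLine(DateTime.Compare(stored, returned)); // -1 (stored is earlier)
    }
}
```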

Right. I know the issue – so where does it originate? First I checked my code, both in the repository method and the stored proc. Nothing there re-setting the value. Off to Google. Here’s the reason…

From MSDN on the DateTime struct:

“Time values are measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar.”

From MSDN on the SQL DateTime type:

“Accuracy – Rounded to increments of .000, .003, or .007 seconds”

So there you have it. There’s a rounding effect applied to times stored in SQL Server 2005. Now that we know the issue, what can we do to avoid or remedy it?

1. Store the DateTime value’s Ticks component using the bigint type in SQL Server

2. If you don’t require the level of precision provided by the .Net DateTime struct, you could potentially ignore the difference, and/or zero the millisecond value.
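To see the rounding in action without a database round-trip, here’s a sketch that emulates SQL Server’s datetime behaviour – the type stores time in 1/300-second units, which is where the .000/.003/.007 increments come from. The helper name is mine; the real rounding is what the server applies on insert (and System.Data.SqlTypes.SqlDateTime models the same thing in .Net).

```csharp
using System;

static class SqlDateTimeRounding
{
    // Emulates SQL Server datetime precision: time of day is stored in
    // 1/300-second units, so milliseconds land on .000, .003 or .007.
    public static DateTime Round(DateTime value)
    {
        // Convert the time of day into 1/300-second units
        // (round half away from zero, matching the documented examples)...
        double units = Math.Round(value.TimeOfDay.TotalMilliseconds * 3.0 / 10.0,
                                  MidpointRounding.AwayFromZero);
        // ...then back to whole milliseconds, as the server reports them.
        double ms = Math.Round(units * 10.0 / 3.0);
        return value.Date.AddMilliseconds(ms);
    }
}
```

Run a DateTime through this before comparing and the “random” 2 ms failures become deterministic: .002 lands on .003, .005 on .007, and .999 rolls over into the next second entirely.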

The best thing about this bug? Occasionally the test would pass!!

— EDIT: Zeroing the millisecond value won’t always work. As pointed out by Tatham over MSN shortly after I originally wrote this, values that differ by under a second still won’t always equate. If you’re going to be comparing sub-second values I would recommend using the Ticks value anyway for a higher level of accuracy.

I was going to write precision but that would have been wrong… 🙂