When you’re designing reports, they are often based on hierarchies represented by “nodes” in a parent-child setup. To the end user, the parent-child representation doesn’t provide much readability, so you need to output this information in a human-readable form, for instance in a table where the names/titles are indented.
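As a minimal sketch of the idea, a recursive common table expression can track each node’s depth and indent the name accordingly. The table and column names here (dbo.nodes with nodeID, parentID, nodeName) are made up for the example:

```sql
-- Sketch: indenting a parent-child hierarchy for display.
-- Assumes a hypothetical table dbo.nodes(nodeID, parentID, nodeName),
-- where top-level nodes have parentID NULL.
WITH hierarchy AS (
    -- Anchor: the root nodes, at depth 0
    SELECT nodeID, nodeName, 0 AS depth
    FROM dbo.nodes
    WHERE parentID IS NULL

    UNION ALL

    -- Recursion: each child is one level deeper than its parent
    SELECT n.nodeID, n.nodeName, h.depth + 1
    FROM hierarchy AS h
    INNER JOIN dbo.nodes AS n ON n.parentID = h.nodeID
)
SELECT REPLICATE(N'    ', depth) + nodeName AS indentedName
FROM hierarchy;
-- Note: to list children directly under their parents, you would
-- also build a sort path in the CTE and ORDER BY it.
```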
Splitting a range string into a table
This week’s post covers a requirement that I see very regularly as a developer. You get a plain-text string containing one or more ranges. The ranges are comma-delimited, and the start and end values of each range are separated by a dash. The string could look something like this, for example: 100-120,121-499,510,520,790-999.
Wouldn’t it be practical if we could construct a table-valued function that returns one row for each range, with columns for the start and end of each range?
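Here’s one rough sketch of what such a function could look like. It assumes SQL Server 2016 or later for STRING_SPLIT (older versions would need a different splitting technique), and it treats single values like 510 as having identical start and end values; the function and column names are made up for this example:

```sql
-- Sketch of a table-valued function that splits a range string.
-- Assumes SQL Server 2016+ for STRING_SPLIT; names are placeholders.
CREATE FUNCTION dbo.fn_split_ranges(@ranges varchar(max))
RETURNS TABLE
AS RETURN (
    SELECT
        -- Everything before the dash (or the whole value if no dash):
        CAST(CASE WHEN CHARINDEX('-', value) > 0
                  THEN LEFT(value, CHARINDEX('-', value) - 1)
                  ELSE value END AS int) AS range_start,
        -- Everything after the dash (or the whole value if no dash):
        CAST(CASE WHEN CHARINDEX('-', value) > 0
                  THEN SUBSTRING(value, CHARINDEX('-', value) + 1, 8000)
                  ELSE value END AS int) AS range_end
    FROM STRING_SPLIT(@ranges, ',')
);

-- Example: returns (100,120), (121,499), (510,510), (520,520), (790,999)
-- SELECT * FROM dbo.fn_split_ranges('100-120,121-499,510,520,790-999');
```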
Temp table myths
Check out this deep dive on table variables that I found the other day:
SQL in the Wild: A trio of table variables
The article deals with some common misconceptions about table variables, and it’s considerably more in-depth than my post on the different types of temp tables.
“Unpivot” using CROSS APPLY
I personally find the UNPIVOT syntax in T-SQL a bit unwieldy and not entirely easy to read. In this article, I’m going to show you an alternative way to perform UNPIVOT, using the CROSS APPLY operator.
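As a quick illustration of the pattern (the table and column names here are made up), each source row is cross-applied to a set of constructed rows, one per column being unpivoted:

```sql
-- Unpivoting with CROSS APPLY instead of UNPIVOT.
-- Hypothetical table dbo.quarterlySales(productID, q1, q2, q3, q4).
SELECT s.productID, x.quarter, x.amount
FROM dbo.quarterlySales AS s
CROSS APPLY (
    VALUES ('Q1', s.q1),
           ('Q2', s.q2),
           ('Q3', s.q3),
           ('Q4', s.q4)
) AS x(quarter, amount)
WHERE x.amount IS NOT NULL;   -- mimics UNPIVOT, which skips NULLs
```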
Calculating median using the NTILE function
A while back, I was shown an absolutely gorgeous median calculation using the new OFFSET/FETCH functionality introduced in SQL Server 2012. But this got me thinking: how can you calculate the median on older versions of SQL Server? Here’s an idea of how to do it using the NTILE() function.
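A rough sketch of the idea, with a made-up table dbo.data(val): split the ordered rows into two tiles with NTILE(2), then take the value(s) at the boundary between the tiles:

```sql
-- Median using NTILE(2): the median sits at the boundary between
-- the two tiles. Hypothetical table dbo.data(val).
WITH tiled AS (
    SELECT val, NTILE(2) OVER (ORDER BY val) AS tile
    FROM dbo.data
)
SELECT CASE
           -- Even row count: average the last value of tile 1
           -- and the first value of tile 2.
           WHEN COUNT(*) % 2 = 0
           THEN (MAX(CASE WHEN tile = 1 THEN val END)
               + MIN(CASE WHEN tile = 2 THEN val END)) / 2.0
           -- Odd row count: NTILE puts the extra row in tile 1,
           -- so the median is the last value of tile 1.
           ELSE MAX(CASE WHEN tile = 1 THEN val END)
       END AS median
FROM tiled;
```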
Traversing parent-child relations
In this short tutorial, we’ll look at traversing parent-child structures using recursive common table expressions, and turning the data into human-readable lists. This is a great way to represent paths and hierarchy-based data in reports and end-user outputs.
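For a taste of the technique, here’s a minimal sketch that builds a readable path for each node, again assuming a hypothetical table dbo.nodes(nodeID, parentID, nodeName):

```sql
-- Building a human-readable path for each node with a recursive CTE.
WITH paths AS (
    -- Anchor: root nodes start the path with their own name
    SELECT nodeID, CAST(nodeName AS nvarchar(max)) AS path
    FROM dbo.nodes
    WHERE parentID IS NULL

    UNION ALL

    -- Recursion: append each child's name to its parent's path
    SELECT n.nodeID, p.path + N' > ' + n.nodeName
    FROM paths AS p
    INNER JOIN dbo.nodes AS n ON n.parentID = p.nodeID
)
SELECT nodeID, path
FROM paths
ORDER BY path;
```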
SARGable expressions and performance
Understanding the concept of SARGable expressions can dramatically help you speed up query performance. Here’s how:
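A quick before-and-after illustrates the point. Assuming a hypothetical dbo.orders table with an index on the orderDate column:

```sql
-- Non-SARGable: wrapping the column in a function prevents an
-- index seek, so every row has to be evaluated.
SELECT orderID
FROM dbo.orders
WHERE YEAR(orderDate) = 2013;

-- SARGable: the bare column is compared to a range, so the
-- optimizer can seek directly on the orderDate index.
SELECT orderID
FROM dbo.orders
WHERE orderDate >= '2013-01-01'
  AND orderDate <  '2014-01-01';
```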
Rebuilding a transaction log file
The database’s transaction log file contains, as the name implies, a log of all the transactions happening in the database. If you’ve set your database to the “simple” recovery model, the log is truncated (emptied) at every checkpoint (more or less all the time). In the “bulk logged” or “full” recovery models, you have to truncate the log yourself by performing a transaction log backup.
Either way, the log file has to stay in sync with the rest of the database at all times. But what do you do if your log file is lost or corrupted? Luckily, there’s a fairly simple way to rebuild it.
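One documented approach, sketched here, is to attach the data file on its own and let SQL Server build a fresh log. Note that this only succeeds if the database was shut down cleanly, and the database name and file path below are placeholders:

```sql
-- Sketch: attach a database without its log file and let SQL Server
-- create a new, empty log. Only works after a clean shutdown;
-- name and path are placeholders.
CREATE DATABASE MyDatabase
ON (FILENAME = N'C:\SQLData\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;
```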
Efficient data, part 6: Versioning changes
This installment in the series on efficient data is about versioning changes in a table. The article is a re-post of a piece I wrote in September on compressing slowly changing dimensions, although the concept doesn’t apply only to dimensions: it can be used on pretty much any data that changes over time.
The idea is to “compress” a versioned table: instead of keeping one dated row for every version, you collapse multiple sequential versions into a single row with a “from” date and a “to” date. This can significantly reduce the size of the table.
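A simplified sketch of the collapsing step, using the classic gaps-and-islands technique; the table dbo.versions(id, val, versionDate) is hypothetical, and here the “to” date is simply the last version date of each run:

```sql
-- Collapse unbroken runs of identical values into from/to ranges.
WITH grouped AS (
    SELECT id, val, versionDate,
           -- Rows with the same value in an unbroken sequence get
           -- the same group number (gaps-and-islands):
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY versionDate)
         - ROW_NUMBER() OVER (PARTITION BY id, val ORDER BY versionDate) AS grp
    FROM dbo.versions
)
SELECT id, val,
       MIN(versionDate) AS dateFrom,
       MAX(versionDate) AS dateTo
FROM grouped
GROUP BY id, val, grp;
```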
Efficient data, part 4: Fill factor
A quick way to improve I/O performance is to set a good fill factor. This installment of the series on efficient data discusses fill factors and page splits, and how they affect physical I/O performance.
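For reference, the fill factor can be set when creating or rebuilding an index. The index and table names below are placeholders:

```sql
-- Rebuild an index, leaving 10% free space on each leaf page to
-- absorb future inserts and updates without page splits.
ALTER INDEX IX_orders_orderDate ON dbo.orders
REBUILD WITH (FILLFACTOR = 90);
```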