A fast way to create a duplicate of an SQL Server Agent job is to script it, then modify and run the script. But there’s a pitfall.
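For context, a scripted Agent job is just a batch of calls to stored procedures in msdb. Here’s a minimal sketch of the shape of such a script, with hypothetical names and far fewer parameters than SSMS actually generates; the comments flag spots you’d typically modify (the post’s actual pitfall is left unspoiled).

```sql
USE msdb;
GO
DECLARE @jobId binary(16);

EXEC dbo.sp_add_job
    @job_name = N'Nightly ETL (copy)',   -- change the name, or the script fails on a duplicate
    @enabled  = 1,
    @job_id   = @jobId OUTPUT;

EXEC dbo.sp_add_jobstep
    @job_id        = @jobId,
    @step_name     = N'Run ETL',
    @subsystem     = N'TSQL',
    @command       = N'EXEC dbo.usp_RunEtl;',
    @database_name = N'ETL';

EXEC dbo.sp_add_jobschedule
    @job_id            = @jobId,
    @name              = N'Nightly',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,        -- every 1 day
    @active_start_time = 010000;   -- 01:00:00
    -- NB: SSMS-generated scripts may also pass @schedule_uid here;
    -- reusing the original job's GUID is a known gotcha.

EXEC dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)';
```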
Traversing parent-child relations
In this short tutorial, we’ll look at traversing parent-child structures using recursive common table expressions, and turning the data into human-readable lists. This is a great way to represent paths and hierarchy-based data in reports and end-user outputs.
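As a taste of the technique, here’s a minimal sketch assuming a hypothetical dbo.Employees table, where ManagerID points to the parent row:

```sql
WITH cte AS (
    -- Anchor: top-level rows (no parent)
    SELECT EmployeeID, ManagerID, CAST(Name AS varchar(max)) AS Path
    FROM dbo.Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive step: append each child to its parent's path
    SELECT e.EmployeeID, e.ManagerID, c.Path + ' > ' + e.Name
    FROM cte AS c
    INNER JOIN dbo.Employees AS e ON e.ManagerID = c.EmployeeID
)
SELECT EmployeeID, Path
FROM cte
ORDER BY Path;
```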
SARGable expressions and performance
Understanding what makes an expression SARGable can dramatically speed up your query performance. Here’s how:
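To illustrate the idea with hypothetical names, compare the same filter written two ways, assuming an index on dbo.Orders(OrderDate):

```sql
-- Not SARGable: the function wrapped around the column
-- prevents an index seek
SELECT * FROM dbo.Orders
WHERE YEAR(OrderDate) = 2014;

-- SARGable: the bare column can be matched against the index
SELECT * FROM dbo.Orders
WHERE OrderDate >= '2014-01-01' AND OrderDate < '2015-01-01';
```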
Calculating business days and holidays
A common scenario you may have encountered is the need to calculate the first business day after a given date. There are quite a few ugly ways to solve this, including cursors, but there are also some pretty neat ways to approach the problem, and as a bonus, you’ll learn about recursion, the LEAD() and LAG() functions, and running aggregates in T-SQL.
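One of the neater approaches, sketched here under assumptions (a hypothetical dbo.Holidays table, and not necessarily the post’s own solution), steps forward one day at a time with a recursive CTE until it lands on a working day:

```sql
DECLARE @date date = '2014-12-24';

WITH cte AS (
    SELECT DATEADD(day, 1, @date) AS TheDate
    UNION ALL
    -- Keep stepping forward as long as the current day
    -- is a weekend or a holiday
    SELECT DATEADD(day, 1, TheDate)
    FROM cte
    WHERE DATENAME(weekday, TheDate) IN ('Saturday', 'Sunday')
       OR TheDate IN (SELECT HolidayDate FROM dbo.Holidays)
)
-- The last row generated is the first business day.
-- NB: DATENAME(weekday, ...) depends on the session language.
SELECT MAX(TheDate) AS FirstBusinessDay
FROM cte;
```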
About the spool operator
The spool operator is one of those icons that show up in a query plan now and then. In this post, we’ll take a closer look at what it does in a query plan, how it affects performance, and what you can do to replace or eliminate it.
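For a quick way to see one in the wild, here’s a classic pattern that can produce an eager table spool for Halloween protection (hypothetical names; the exact plan depends on your version, indexes and data):

```sql
CREATE TABLE dbo.Prices (
    ProductID int PRIMARY KEY,
    Price     money NOT NULL
);
CREATE INDEX IX_Price ON dbo.Prices (Price);

-- The update both reads and writes IX_Price, so the optimizer may
-- add an eager spool to separate the read phase from the write phase.
UPDATE dbo.Prices
SET Price = Price * 1.1
WHERE Price < 100;
```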
Rebuilding a transaction log file
The database’s transaction log file contains, as the name implies, a log of all the transactions happening in the database. If you’ve set your database to the “simple” recovery model, the log is truncated (its space marked as reusable) at every checkpoint, which is more or less all the time. In the “bulk logged” or “full” recovery models, you’ll have to truncate the log yourself by performing a transaction log backup.
Whatever the recovery model, the log file has to stay in sync with the rest of the database at all times. But what do you do if your log file is lost or corrupted? Luckily, there’s a fairly simple way to rebuild it.
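One common approach, shown here as a hedged sketch (it may not be the exact method the post describes), is to attach the data file and let SQL Server build a new, empty log. This only works safely if the database was shut down cleanly:

```sql
-- File name and path are hypothetical
CREATE DATABASE MyDatabase
ON (FILENAME = N'D:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;
```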
Efficient data, part 6: Versioning changes
This installment in the series on efficient data is about versioning changes in a table. The article is a re-post of a piece I wrote in September on compressing slowly changing dimensions, although the concept doesn’t apply only to dimensions – it can be used on pretty much any data that changes over time.
The idea is to “compress” a versioned table: instead of storing one dated row for every version, you collapse multiple sequential, identical versions into a single row with a “from” date and a “to” date. This can significantly reduce the size of the table.
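As a rough sketch of the compression step, assuming a hypothetical dbo.CustomerVersions table (CustomerID, ValidFrom, Status), a gaps-and-islands pass collapses consecutive rows with the same value into one interval:

```sql
WITH marked AS (
    -- Flag rows where the value changes from the previous version
    SELECT CustomerID, Status, ValidFrom,
           LEAD(ValidFrom) OVER (PARTITION BY CustomerID ORDER BY ValidFrom) AS ValidTo,
           CASE WHEN Status = LAG(Status) OVER (PARTITION BY CustomerID ORDER BY ValidFrom)
                THEN 0 ELSE 1 END AS IsNewGroup
    FROM dbo.CustomerVersions
), grouped AS (
    -- Running total of the flags assigns a group number per "island"
    SELECT *,
           SUM(IsNewGroup) OVER (PARTITION BY CustomerID ORDER BY ValidFrom
                                 ROWS UNBOUNDED PRECEDING) AS GroupNo
    FROM marked
)
SELECT CustomerID, Status,
       MIN(ValidFrom) AS ValidFrom,
       -- The current version is open-ended, so its "to" date stays NULL
       CASE WHEN COUNT(ValidTo) = COUNT(*) THEN MAX(ValidTo) END AS ValidTo
FROM grouped
GROUP BY CustomerID, GroupNo, Status;
```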
Efficient data, part 5: Datatypes
This week’s installment in the series on efficient data is all about data types. Be smart about choosing them: the right type for the right job will not only store your data in less physical space and enforce data integrity better – smaller tables and indexes will also improve overall query performance.
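A contrived before-and-after, with hypothetical names, shows the kind of choices involved:

```sql
-- Oversized: every column is wider than the data it holds
CREATE TABLE dbo.Visits_wide (
    VisitID    bigint,        -- int is plenty for fewer than ~2.1 billion rows
    VisitDate  datetime,      -- only the date part is ever used
    StatusCode nvarchar(max)  -- holds a two-letter ASCII code
);

-- Right-sized: same data, fraction of the footprint
CREATE TABLE dbo.Visits_narrow (
    VisitID    int,     -- 4 bytes instead of 8
    VisitDate  date,    -- 3 bytes instead of 8
    StatusCode char(2)  -- 2 bytes instead of LOB overhead
);
```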
Efficient data, part 4: Fill factor
A quick way to improve I/O performance is to set a good fill factor. This installment of the series on efficient data discusses fill factors and page splits, and how they affect physical I/O performance.
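For reference, here’s how you’d set a fill factor when rebuilding an index (names are hypothetical):

```sql
-- Leave 10% free space on each leaf page, so inserts and updates
-- can be absorbed without page splits
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);
```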
Efficient data, part 3: Compression
This is the third part in a series of articles on optimizing large volumes of data. This part is all about using compression, and since I wrote an article on that just a few months ago, I’m not going to reinvent the wheel: this article is a repost of my July article on compression.
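As a quick reminder of what that looks like in practice (names are hypothetical):

```sql
-- Estimate the gain first with sp_estimate_data_compression_savings,
-- then rebuild with the chosen compression level
ALTER TABLE dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);

ALTER INDEX IX_FactSales_Date ON dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = ROW);
```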