A fast way to create a duplicate of an SQL Server Agent job is to script it, then modify and run the script. But there’s a pitfall.
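As a rough sketch of the scripting approach (the job, step and schedule below are made up for illustration): the generated script boils down to calls to msdb.dbo.sp_add_job and its sibling procedures, and at the very least the job name has to change before the script can be re-run, since job names must be unique.

USE msdb;
GO

DECLARE @jobId uniqueidentifier;

-- Create the duplicate under a new, unique name.
EXEC dbo.sp_add_job
    @job_name = N'Nightly ETL (copy)',    -- renamed from N'Nightly ETL'
    @enabled  = 1,
    @job_id   = @jobId OUTPUT;

-- The job step(s), copied from the generated script.
EXEC dbo.sp_add_jobstep
    @job_id    = @jobId,
    @step_name = N'Run ETL',
    @subsystem = N'TSQL',
    @command   = N'EXEC dbo.RunNightlyETL;';

-- A schedule: freq_type 4 = daily, starting at 03:00:00.
EXEC dbo.sp_add_jobschedule
    @job_id            = @jobId,
    @name              = N'Nightly at 03:00',
    @enabled           = 1,
    @freq_type         = 4,
    @freq_interval     = 1,
    @active_start_time = 30000;

-- Target the local server, so the job actually runs somewhere.
EXEC dbo.sp_add_jobserver
    @job_id      = @jobId,
    @server_name = N'(local)';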
Traversing parent-child relations
In this short tutorial, we’ll look at traversing parent-child structures using recursive common table expressions, and turning the data into human-readable lists. This is a great way to represent paths and hierarchy-based data in reports and end-user outputs.
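Here's a minimal sketch of the pattern (the table and columns are hypothetical):

-- A parent-child table: each row points at its parent.
CREATE TABLE dbo.Employees (
    employeeID   int PRIMARY KEY,
    managerID    int NULL REFERENCES dbo.Employees (employeeID),
    employeeName nvarchar(100) NOT NULL
);

WITH cte AS (
    -- Anchor member: the root rows (no parent).
    SELECT employeeID, CAST(employeeName AS nvarchar(max)) AS [path]
    FROM dbo.Employees
    WHERE managerID IS NULL

    UNION ALL

    -- Recursive member: join each child to the rows found so far,
    -- appending its name to the parent's path.
    SELECT e.employeeID, cte.[path] + N' > ' + e.employeeName
    FROM dbo.Employees AS e
    INNER JOIN cte ON e.managerID = cte.employeeID
)
SELECT employeeID, [path]
FROM cte
OPTION (MAXRECURSION 100);    -- a safety net against cycles

Each root row yields something like "CEO", and every descendant builds on its parent's path: "CEO > CFO > Accountant".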
SARGable expressions and performance
Understanding the concept of SARGable expressions can help you dramatically speed up query performance. Here’s how:
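The short version, with a hypothetical table: an expression is SARGable ("Search ARGument-able") when the column is left untouched on its side of the comparison, so the optimizer can seek into an index on it.

-- Assuming an index on dbo.Orders (orderDate):

-- Not SARGable: wrapping the column in a function hides it from the
-- optimizer, which has to scan and evaluate YEAR() for every row.
SELECT orderID FROM dbo.Orders WHERE YEAR(orderDate) = 2023;

-- SARGable: the column is compared as-is, so an index seek on the
-- date range is possible.
SELECT orderID FROM dbo.Orders
WHERE orderDate >= '20230101' AND orderDate < '20240101';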
About the spool operator
The spool operator is one of those icons that will show up in a query plan now and then. In this post, we’ll take a closer look at what it does in a query plan, how it affects performance, and what you can do to substitute or eliminate it.
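To make that concrete with a hypothetical example: one classic source of an eager table spool is Halloween protection, where an update reads from the very index it's modifying.

CREATE TABLE dbo.Accounts (
    accountID int PRIMARY KEY,
    balance   money NOT NULL
);
CREATE INDEX IX_Accounts_balance ON dbo.Accounts (balance);

-- The seek on IX_Accounts_balance reads the same column the statement
-- writes, so the plan will typically include an Eager Table Spool that
-- separates the read phase from the write phase (Halloween protection).
UPDATE dbo.Accounts
SET balance = balance * 1.10
WHERE balance < 100;

One way to make this particular spool disappear is to filter on a column that isn't being updated, so the index used to locate the rows no longer overlaps with the data being written.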
Rebuilding a transaction log file
The database’s transaction log file contains, as the name implies, a log of all the transactions happening in the database. If you’ve set your database to the “simple” recovery model, the log is truncated (emptied) at every checkpoint, which happens more or less all the time. In the “bulk logged” or “full” recovery model, you’ll have to truncate the log yourself by performing transaction log backups.
Because it plays this central role, the log file has to stay in sync with the rest of the database at all times. But what do you do if the log file is lost or corrupted? Luckily, there’s a fairly simple way to get the database back.
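As a sketch of one such approach (the database name and file path are made up, and this assumes the database was shut down cleanly):

-- Attach the data file without its log file; SQL Server builds a new,
-- empty log. This only works if the database was cleanly shut down;
-- otherwise the old log would be needed for crash recovery.
CREATE DATABASE MyDatabase
ON (FILENAME = N'D:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;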
Efficient data, part 5: Datatypes
This week’s installment in the series on efficient data is all about data types. Be smart about choosing them: the right type for the right job will not only store your data in less physical space and enforce data integrity better; the resulting smaller indexes and tables will also improve overall query performance.
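To illustrate with a made-up table, here are two ways to store the same three columns:

-- Oversized: 20 bytes per row.
CREATE TABLE dbo.Visits_wide (
    visitID    bigint   NOT NULL,   -- 8 bytes
    visitDate  datetime NOT NULL,   -- 8 bytes, with time-of-day we don't need
    statusCode int      NOT NULL    -- 4 bytes for a handful of distinct values
);

-- Right-sized: 8 bytes per row for the same information.
CREATE TABLE dbo.Visits_narrow (
    visitID    int     NOT NULL,    -- 4 bytes; good for ~2.1 billion rows
    visitDate  date    NOT NULL,    -- 3 bytes; we only need the day
    statusCode tinyint NOT NULL     -- 1 byte; values 0 to 255
);

Multiply that difference across millions of rows, and across every index that includes these columns, and it adds up quickly.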
Efficient data, part 4: Fill factor
A quick way to improve I/O performance is to set a good fill factor. This installment of the series on efficient data discusses fill factors and page splits, and how they affect physical I/O performance.
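For instance (the index name is hypothetical), this rebuild leaves 10% free space on every leaf-level page, giving future inserts room to land without splitting the page:

-- Rebuild with 90% fill: each leaf page is left 10% empty.
ALTER INDEX IX_Orders_customerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

The trade-off is that a lower fill factor makes the index physically larger, so reads touch more pages; it's a balance, not a free lunch.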
Efficient data, part 3: Compression
This is the third part in a series of articles on optimizing large volumes of data. This part is all about using compression, and since I wrote an article on that a few months ago, I’m not going to re-invent the wheel. So this article is a repost of my July article on compression.
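For reference, a quick sketch of how it's done (the table name is made up): estimate the savings first, then rebuild with compression.

-- Estimate how much space PAGE compression would save on the table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'Orders',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- If the numbers look good, rebuild the table with compression on.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);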
Efficient data, part 2: Partitioning
This is the second part in a series on storing and modelling data efficiently. A great way to improve performance on large volumes of data is to partition it. As the name implies, partitioning splits a table or index into multiple partitions, so the data can be stored across multiple physical files and drives. Partitioning is a feature of SQL Server Enterprise Edition (in versions before SQL Server 2016 SP1), but if you’re on the right edition, you’re in luck!
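A minimal sketch of the moving parts (the names and yearly boundaries are made up): a partition function defines the boundary values, a partition scheme maps the resulting partitions to filegroups, and the table is then created on the scheme.

-- One partition per year, with each boundary value belonging to the
-- partition on its right.
CREATE PARTITION FUNCTION pfOrderDate (date)
AS RANGE RIGHT FOR VALUES ('2022-01-01', '2023-01-01', '2024-01-01');

-- Map all partitions to PRIMARY for brevity; in practice, you'd
-- spread them across multiple filegroups and drives.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- The partitioning column needs to be part of the clustered key.
CREATE TABLE dbo.Orders (
    orderID   int  NOT NULL,
    orderDate date NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (orderID, orderDate)
) ON psOrderDate (orderDate);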
No explanation required.
I’ll just leave this link right here. Good stuff, no further explanation required.