On the surface, these query shortcuts are just what the name implies – a key combination that you can press to run a command or execute a stored procedure. But there’s a hidden superpower: whatever text you’ve selected in SSMS when you press the keyboard combination gets appended to the shortcut statement.
So if you select the name of a table, for instance “dbo.Votes”, and press Ctrl+F1, SSMS will run:
SELECT TOP (1000) * FROM dbo.Votes
Preview the contents of a table
This allows you to create a keyboard shortcut to instantly preview the contents of a table or view.
And you can select not just the name of one table, but any other query text you want to tack on:
Preview the contents of two joined tables.
Because we’ve selected both the name of a table and the next line, pressing Ctrl+F1 in SSMS will effectively run the following command:
SELECT TOP (1000) * FROM dbo.Votes AS v
INNER JOIN dbo.VoteTypes AS vt ON v.VoteTypeId=vt.Id
You can go on to include as many joins, WHERE clauses and ORDER BY clauses as you like, as long as the syntax makes sense:
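For instance, selecting all of the following text and pressing Ctrl+F1 would effectively run the query below (the filter and sort columns here are just illustrative examples, not something from the screenshots above):

SELECT TOP (1000) * FROM dbo.Votes AS v
INNER JOIN dbo.VoteTypes AS vt ON v.VoteTypeId=vt.Id
WHERE vt.Name=N'UpMod'
ORDER BY v.CreationDate DESC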
Remember that query shortcuts only apply to new windows, so if you change them, you’ll have to open a new window for the change to take effect.
Implicit conversions in SQL Server follow a specific, predictable order, called data type precedence. This means that if you compare or add/concatenate two values, a and b, with different data types, you can predict which one will be implicitly converted to the other’s data type in order to complete the operation.
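As a quick sketch of the rule (just two variables, nothing fancy): int ranks higher than varchar in data type precedence, so the varchar value is the one that gets converted.

DECLARE @i int = 42, @s varchar(10) = '42';

-- int outranks varchar in data type precedence, so @s is implicitly
-- converted to int (not the other way around) before the comparison:
SELECT CASE WHEN @i = @s THEN 'equal' ELSE 'not equal' END AS result;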
I stumbled on an interesting exception to this rule the other day.
I did some googling to see just how simple I could make a database deployment pipeline. I keep the DDL scripts in a git repository on the local network, but I can’t use Azure DevOps or any other cloud service, and I don’t have Visual Studio installed, so the traditional database project in SSDT that I know and love is unfortunately not an option for me.
So here’s what I ended up doing.
If you’re using a local service account for your SQL Server service, your server won’t automatically have permissions to access other network resources like UNC paths. Most commonly, you’ll need this in order to perform backups directly to a network share.
Using a domain account as your SQL Server service account will allow the server to access a network share on the same domain, but if the network share is not on your domain, like an Azure File Share, you need a different solution.
There’s a relatively easy way to make all of this work, though.
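One approach that can work here (a sketch only, not necessarily the exact method described below, and with placeholder account, share and key names) is to authenticate against the share from inside SQL Server using xp_cmdshell and NET USE, so the connection is established under the service account’s own session:

-- Sketch: assumes xp_cmdshell is enabled; storage account, share and key
-- are placeholders for your own values.
EXEC master..xp_cmdshell
    'net use \\mystorageaccount.file.core.windows.net\backups /user:AZURE\mystorageaccount <storage-account-key>';

-- Once the session is authenticated, the UNC path works like any other:
BACKUP DATABASE MyDatabase
    TO DISK='\\mystorageaccount.file.core.windows.net\backups\MyDatabase.bak';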
You’re a consultant or remote worker, and you’re connecting into your work or client network using a VPN. But when you try to connect to the SQL Server, you get this.
There are a few simple workarounds, each with their own pros and cons.
A number of business processes require you to distribute a value over a date range. However, if your distribution keys and values don’t add up perfectly or aren’t perfectly divisible, it’s very easy to get rounding errors in your distributions.
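As a minimal illustration of the problem (with made-up numbers), naively splitting 100.00 evenly over three days leaves you a cent short:

DECLARE @total numeric(10, 2)=100.00;

-- Each of the three shares rounds to 33.33, so the parts add up
-- to 99.99 rather than the original 100.00:
SELECT [day], CAST(@total/3 AS numeric(10, 2)) AS share
FROM (VALUES (1), (2), (3)) AS x([day]);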
There’s more to the VALUES clause in T-SQL than meets the eye. We’ve all used the most basic INSERT syntax:
INSERT INTO #work (a, b, c)
VALUES (10, 20, 30);
But did you know that you can create multiple rows using that same VALUES clause, separated by commas?
INSERT INTO #work (a, b, c)
VALUES (10, 20, 30),
(11, 21, 31),
(12, 22, 32);
Note the commas at the end of each line, marking where each new row begins. Because this runs as a single statement, the INSERT is an atomic operation, meaning that either all rows are inserted or none at all (for instance, if there’s a syntax error or a constraint violation).
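To see that atomicity in action, here’s a small sketch (the CHECK constraint is hypothetical, just to force a failure):

CREATE TABLE #work (a int, b int, c int CHECK (c>=0));

-- The last row violates the constraint, so the whole statement fails
-- and no rows at all are inserted:
INSERT INTO #work (a, b, c)
VALUES (10, 20, 30),
       (11, 21, 31),
       (12, 22, -1);

SELECT COUNT(*) AS row_count FROM #work;   -- 0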
I use this construct all the time to generate scripts to import data from various external sources, like Excel, or even a result set in Management Studio or Azure Data Studio.
Here’s something you can try:
Select a dataset from SSMS or Excel, copy it to the clipboard, and paste it into a new SSMS window.
Select just one of the tab characters, then use the “find and replace” feature (Ctrl+H) in SSMS to replace all tabs with the text ', ' (including the apostrophes).
Now, add the text (' at the beginning of each line and '), at the end of each line. The last line obviously won’t need the trailing comma. If you’re handy with SSMS, you can add at least the leading text with a “box select”: hold down the Alt key as you make a zero-width selection over all the rows, then type the text.
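With a small made-up dataset, the finished script could end up looking something like this (the table and column names are just examples):

INSERT INTO dbo.People (FirstName, City)
VALUES ('Anna', 'Stockholm'),
       ('Erik', 'Göteborg'),
       ('Maria', 'Malmö');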
If all of this sounds like a lot of work to you, you might want to try out a little web hack that I wrote. It allows you to paste a tab-delimited dataset, just like the ones you get from Excel or the results pane in SSMS or ADS, into a window and instantly convert it into a T-SQL INSERT statement with the click of a button.
Pro tip: in SQL Server Management Studio, use Ctrl+Shift+C to copy not only the results, but also the column names!
Options
First row has headers: instead of inserting the first row of the raw data, the script uses it to map the INSERTed values to the correct columns in the destination table.
Fix nulls: particularly when exporting from SSMS, we lose information about which values are actual NULLs and which ones are just the text “NULL”. When this option is unchecked, those values are treated as the text “NULL”; when it’s checked, any value that consists entirely of the text “NULL” has its surrounding apostrophes removed, so it becomes an actual NULL value.
Pretty: adds some indenting spaces to the output code. This makes the script a few bytes larger, but improves readability.
Table name: optional table name to put in the INSERT INTO header of the script.
And to make sure you sleep well at night, the entire process on table.strd.co happens in the browser – nothing is ever uploaded to the Internet.
Here’s a quick tip that touches on one of the powerful SSMS tricks in my “Management Studio Level-Up” presentation. Say you have a potentially large number of database objects (procedures, functions, views, what have you), and you need to make a search-and-replace kind of change to all of those objects.
You could, of course, put the database in source control and use a proper IDE to replace everything, then check your code back into source control and commit it to the database. That’s obviously the grown-up solution. Thanks for reading this post.
But let’s say for the sake of argument that you haven’t put your database in version control. What’s the lazy option here?
Date and time values are not entirely intuitive to aggregate into averages in T-SQL, although the business case arguably exists. Suppose, for instance, that you have a production log with a “duration” column (of the “time” data type), and you want to find the total or average duration for a certain group of items.
It’s possible, but I would still call it a workaround.
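Here’s the kind of workaround I mean (a sketch, using a hypothetical ProductionLog table): convert each duration to a number of seconds, aggregate the seconds, and then convert the result back to a time value.

-- Hypothetical table: dbo.ProductionLog (ItemGroup, Duration time(0)).
SELECT ItemGroup,
       DATEADD(SECOND,
               AVG(DATEDIFF(SECOND, CAST('00:00:00' AS time(0)), Duration)),
               CAST('00:00:00' AS time(0))) AS AvgDuration
FROM dbo.ProductionLog
GROUP BY ItemGroup;

-- Note that a *total* duration can exceed 24 hours, which no longer fits
-- in a time value, so sums are often better left as a number of seconds.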
I’ve been working on a little gadget for a while now, and today I finally got around to completing it, so now I’ve published it for everyone to try out. It’s a web API (wait, wait, don’t go away – it’s for database people!) that creates a randomized list of names, addresses, etc.
In this post, I’ll show you how easy it is to use this service to anonymize a development or test database so you don’t have all that personally identifiable information floating around.