Not giving a shit about performance is tech-debt

For practically every piece of code you develop, there will be trade-offs. Sometimes you can combine the best of both worlds; other times it comes down to hard choices. For T-SQL developers, it typically boils down to a few key questions:

  • How much time can you spend perfecting code instead of just shipping?
  • Can we just fix it when it becomes a problem?
  • Is buying more hardware cheaper than paying for developers to tune their code?
  • Is better code harder to read, and will a junior developer be able to work with it?

Technical debt

I have a background in building accounting software and financial BI solutions, so the first time I heard the term “technical debt”, the comparison to financial debt was immediately clear to me. A financial investment – say you’re investing in a small startup company – is normally settled using a mix of your own money and some borrowed money. The advantage of borrowing is that you can get going and start making money sooner than if you had to earn all that investment capital first. On the other hand, your lenders will want to be paid interest on a regular basis, and you won’t want to spend too much money on interest, or you won’t be able to afford the down payments.

Similarly, any software project is an investment of your time and effort into a product that you’ll want to ship some day. To ship faster, you can take a few shortcuts or leave certain features for later – these shortcuts are your technical debt, and you’re going to have to manage them going forward, every time you develop or refactor anything. This extra work is your “interest payment”. Developing permanent fixes for those shortcuts is the equivalent of the down payments, in that it reduces your interest.

Too much financial debt, and you won’t be able to pay for your groceries because you’re spending all your money on interest. Too much tech-debt, and you won’t be able to build or refine your code because all you do is manage incidents and build more workarounds.

Striving for perfection?

Ask anybody who has some performance tuning experience and they’ll agree that there is practically no limit to how far you can tune a query. But the marginal gains diminish rapidly: your first two hours of query tuning may yield as much relative improvement as the following two weeks.

If your ETL task runs for 20 minutes, would 60 seconds be good enough? Or do you need it to finish in 250 milliseconds? It could probably be done, but you might find the cost prohibitively high.

Proactive or reactive?

Technical debt is to a large extent reactive. The very idea of tech-debt is that you accept a few future risks in order to ship the product now. And like all risks, they can be OK as long as you have a clear understanding of them and weigh the probability and cost of an incident. If a blip in the network or a misplaced character in a text file could crash your business reports in the middle of year-end closing, you need to be sure that you and the business understand this and are OK with it.

In fact, put this in writing, in an SLA.

As for slow-running reports, queries or stored procedures: yes, you could choose to address them once they become a problem. Just remember that performance degradation tends to happen suddenly, in thresholds. If it’s slow today, it could stall tomorrow. Don’t wait for the day when four or five simultaneous jobs are thrashing the server’s disks, or perhaps even preventing you from logging on to the SQL Server. That could very well be your Christmas Day on-call. I’m not making this stuff up.

“Don’t worry, we’ll just buy more {memory|disk|cpu}”

Let’s put this one to rest right away. If you spend a few hours reading through some of the posts on this blog, I’m pretty sure you can learn how to take some of your top 10 worst-performing queries and tune them to run on the order of 10 to 100 times faster. You or your consultant might have to invest a day or two tweaking the code, but I’m pretty sure it can be done.
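
If you’re not sure which those worst performers are, the plan cache will point you to them. Here’s a minimal sketch using the standard sys.dm_exec_query_stats DMV; ordering by total CPU time is my assumption, and you may care more about reads or duration:

    -- Top 10 statements in the plan cache by total CPU time:
    SELECT TOP (10)
           qs.total_worker_time/1000 AS total_cpu_ms,
           qs.execution_count,
           qs.total_worker_time/qs.execution_count/1000 AS avg_cpu_ms,
           SUBSTRING(st.[text], qs.statement_start_offset/2+1,
               (CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.[text])
                    ELSE qs.statement_end_offset
                END-qs.statement_start_offset)/2+1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;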

Now, how much do you reckon a new server with 10 times your current capacity would cost?

But elegant code is harder to read

Sure, if you apply some of the cooler T-SQL tricks, your intern or your new hire may not immediately follow what’s happening in the code. But you could offset that relatively easily by commenting your code.

Or you could encapsulate some of the trickier passages in dedicated stored procedures. Make those procedures atomic, documented, robust and tuned. That way, you might be able to keep the frequently accessed code clean and readable, while the highly complex code is separated.
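
For example, a windowed running-total calculation could live in a single, documented procedure like the sketch below, so callers never need to touch the clever parts. The procedure, table and column names here are made up for illustration:

    CREATE PROCEDURE dbo.Get_running_balance
        @account_id int
    AS
    /*
        Returns all transactions for @account_id along with a running balance.
        Uses a window aggregate with an explicit ROWS frame, which avoids the
        slower default RANGE framing. Treat this as tuned, tested code: change
        it here, in one place, rather than in every caller.
    */
    BEGIN
        SET NOCOUNT ON;

        SELECT t.transaction_date,
               t.amount,
               SUM(t.amount) OVER (
                   ORDER BY t.transaction_date, t.transaction_id
                   ROWS UNBOUNDED PRECEDING) AS running_balance
        FROM dbo.Transactions AS t
        WHERE t.account_id=@account_id
        ORDER BY t.transaction_date, t.transaction_id;
    END;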

And regular index tuning and maintenance won’t affect your code at all.
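
As a rough sketch (the index name below is hypothetical, and the 10 and 30 percent thresholds are just the commonly quoted rules of thumb), a periodic maintenance job could look for fragmented indexes and reorganize or rebuild them:

    -- Indexes with noticeable fragmentation in the current database:
    SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
           i.[name] AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    INNER JOIN sys.indexes AS i ON
        i.[object_id]=ips.[object_id] AND
        i.index_id=ips.index_id
    WHERE ips.avg_fragmentation_in_percent>10.0 AND ips.index_id>0
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Then, per index: REORGANIZE for light fragmentation, REBUILD for heavy.
    ALTER INDEX IX_Transactions_account_id ON dbo.Transactions REORGANIZE;
    ALTER INDEX IX_Transactions_account_id ON dbo.Transactions REBUILD;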

The bottom line

When you think of technical debt, you may think only of classic shortcuts like making assumptions about the data, not using a TRY-CATCH block or perhaps hard-coding a manual correction into a stored procedure or view.
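
For reference, the TRY-CATCH pattern mentioned above looks something like the sketch below; the body is just a placeholder, but the point is that an error rolls the transaction back and is re-raised, rather than silently leaving things half-done:

    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... the actual work goes here ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT>0
            ROLLBACK TRANSACTION;

        -- Re-raise the original error to the caller instead of swallowing it:
        THROW;
    END CATCH;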

But I would argue that not paying attention to performance is just as much a technical debt. And unlike code that simply crashes with an error message, performance issues are not always easy to fix in production when your business users are working late to meet their deadlines, or when your web requests are timing out. Start thinking of performance as an important part of your development process – half the job is getting the right data in the right place, the other half is making sure that your solution will handle double or triple the workload, preferably under memory pressure and with other workloads running at the same time.

And if you don’t have the time or the inclination to tune your code right now, note it down as technical debt.

I’ll round off my tirade with this very fitting Tumblr post. Make sure to follow @DBAReactions on Twitter.

When someone says that they don’t have time to be proactive because they are too busy fighting fires. (HT @SQLSoldier)


4 thoughts on “Not giving a shit about performance is tech-debt”

  1. Brilliant, but of course if you tune code, you need extremely skilled people to do so!
    Most of the cheaper resources out there cannot write well-documented or performant code when it comes to databases. To do that, they must understand the cost-based optimizer and how SQL Server works internally.
    Part of the speed of code comes from the database design and its ability to satisfy regular operations and queries. Often, things like SSAS and data marts need queries to load them.

    • Very true. What I don’t understand is why well-written and well-performing code is considered important in other programming languages, whereas with SQL, the bar is often lower.
      Maybe it’s because most people who write SQL do it as a “second language”.

  2. I know this is about database developers but I want to see the same discussion for DBAs.

    When you’re running a large-scale operation, how do you determine the SLAs to pass on to your internal customers about what performance they can expect?

    I mean, it’s not as simple as benchmarking your server. You might have 50 servers running 10 apps each; today you divide performance by 10 and say that’s your SLA, and tomorrow the business wants you to consolidate another 10 apps there because the server is under-utilised.

    It’s the same problem in a way but at a higher level of scale.

    • Interesting subject that I’ve encountered at times as a consultant.

      How to define, measure and follow up on SLAs is a bit outside my comfort zone, but it makes sense that you can’t just pile on more apps and expect the same performance.

