Software Performance Economics

There is lots of information out there about how software performance impacts your ability to sell, but I haven't seen much about the cost of building higher performing software. What does it cost to go back and add a caching tier after the fact? What drives that cost? I would guess it is related to how complex the usage of the cached data is, but I'd love to see hard data. Quantifying that relationship seems like it would be an interesting place for research. Today's post is mostly about the performance side of the equation; next week I'll look at the scalability side.

There is plenty of discussion in the community about the economics of software construction as it relates to quality: the cost of building quality in versus dealing with the fallout from low quality later. But there is little discussion of the economics of building a high-performance solution as opposed to a low-performance one. I think this is because low performance is perceived as a design defect, while low quality is perceived as an implementation defect.

It seems like there are a couple of forces at play. There is some fixed cost to building a feature in an application, and that initial implementation comes with some baseline level of performance, say an algorithm that is n^2 in the number of customers in the system. With ten customers, n^2 is fine; with ten thousand you might have a problem; at a million it is looking ugly. But if you started with n^2, the question becomes what it costs to get from n^2 to n log n or even better. Premature optimization has a bad reputation, since it is assumed (for good reason) that a better performing solution will cost more, either in initial cost or in long-term maintenance.
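To make that gap concrete, here is a quick back-of-the-envelope sketch (in Python, purely illustrative) comparing rough step counts for an n^2 algorithm and an n log n algorithm at those customer counts, ignoring constant factors:

import math

# Rough step counts at the customer counts mentioned above.
# Constant factors are ignored, so treat these as orders of magnitude only.
for n in (10, 10_000, 1_000_000):
    print(f"n={n:>9,}  n^2={n**2:>16,}  n log n={n * math.log2(n):>14,.0f}")

At ten customers the difference is negligible; at a million it is the difference between roughly 2x10^7 steps and 10^12.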

If it takes four developers a month to build something that works at an n^2 performance level, would it take another week to get it to n log n, or another month? And if you aimed directly for n log n or better from the start, how would that change the cost?

Imagine writing a sort implementation. Writing either a bubble sort or a quicksort is easy enough, since both are well known enough that you can just look them up by name. There are so many ready-made options that, outside of school, most people will never write one. On the other hand, for a more complex sorting situation there may not be many published options at various levels of optimization to choose from. Maybe you need a sort that can take prior sorting decisions into account; in that case you would probably end up with a selection sort that runs in n^2 time. I had to do exactly that once, and in that specific case the runtime was fine since n was capped at 100, but in other circumstances it would have been a severe bottleneck. If n were expected to be truly large, what could I have done? There are concurrent sorting algorithms out there that could throw more resources at the problem, and if n were ridiculously large there are ways to sort data that doesn't fit in memory. But even this example is still in a well-defined and well-researched problem domain.
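To sketch what I mean by a sort that depends on prior decisions (the function name here is hypothetical, for illustration only): you end up with a selection-sort-shaped loop, because each pick has to look at everything already placed, which rules out a single call to a library sort and gives you the n^2 running time.

# best_candidate() is a hypothetical stand-in for the domain-specific rule that
# decides which remaining item comes next, given everything already placed.
# n passes, each scanning the remaining items: O(n^2) overall.
def order_with_history(items, best_candidate):
    remaining = list(items)
    placed = []
    while remaining:
        nxt = best_candidate(remaining, placed)
        remaining.remove(nxt)
        placed.append(nxt)
    return placed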

The original solution to my specific case was a hideous optimization that approximated the correct answer. That approximation eventually caused trouble when an item that should have been included in the result wasn't. Once we described the problem in terms of a sort, we had the opportunity to rebuild the solution, and the rebuild took significantly less time than the original (two weeks versus four), since we ended up with a simpler algorithm doing the same thing. Most of the time during the rewrite went into research, figuring out how to solve the problem; during the initial implementation most of it went into testing and bug fixing. This is the only time I've been involved in this kind of rewrite at the application level, which I suspect makes it a significant outlier.

This post is unfortunately more questions than answers; I don't have the data to draw any solid conclusions. But it is clear that building the most efficient implementation possible, regardless of the scale of the business need, isn't the right choice. Understanding the situation is key to deciding what level of performance you need to hit. Not everything is as straightforward as sorting algorithms, but most interesting problems have some body of researched algorithms available.

Postscript for anyone who is interested in the specific problem I was working on. We were trying to figure out what order a set of events should happen in, but some of them couldn't start until after a specific time, or until some other set of events had completed. So we needed to pick the first thing to occur, then the second, and so on. Most sorts don't work that way, but that's how we ended up with a selection sort: it always selects the first element, then the second, so at each step we could compute what time the next item would be starting at.
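Here is a minimal sketch of that shape (my reconstruction, with invented names and fields, not the actual code): each pass scans the remaining events, finds the one that can start soonest given the events already scheduled, and appends it to the order.

from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    duration: float
    earliest_start: float = 0.0                      # can't start before this time
    prerequisites: set = field(default_factory=set)  # names of events that must finish first

def schedule(events):
    # Selection-sort-style ordering: repeatedly pick whichever remaining event
    # can start the soonest, given what has already been scheduled.
    remaining = list(events)
    finish_at = {}  # event name -> finish time of events already placed
    order = []

    def start_time(e):
        # An event can start no earlier than its own earliest_start, and no
        # earlier than the finish time of any of its prerequisites.
        prereq_done = max((finish_at[p] for p in e.prerequisites), default=0.0)
        return max(e.earliest_start, prereq_done)

    while remaining:  # n passes, each scanning the remaining events: O(n^2)
        ready = [e for e in remaining if e.prerequisites.issubset(finish_at)]
        if not ready:
            raise ValueError("prerequisites can never be satisfied")
        nxt = min(ready, key=start_time)
        finish_at[nxt.name] = start_time(nxt) + nxt.duration
        order.append(nxt)
        remaining.remove(nxt)
    return order

For example, schedule([Event("b", 2, prerequisites={"a"}), Event("a", 1, earliest_start=5)]) returns the events in the order a, b, with b's start time computed from a's finish.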
