Book Chat: Daring Greatly

This is a bit outside of the normal material I write about here, but I felt that it was something that others might appreciate as well. One of my wife’s classmates gave her a copy of Daring Greatly and said that it was part of his inspiration to be in their doctoral program. I’ve always read broadly and tried to find a way to integrate that into my life. I saw this book and wasn’t sure if it was the sort of thing for me, but I decided to read it to find out. I started reading it and maybe a third of the way through I thought to myself, “I’ve read this book before but with the word vulnerability replaced with the word authenticity.” But I kept reading and eventually something started resonating with me.

What resonated was the idea that you need to put yourself out there, give more of yourself to the situation, and say those things you are thinking. You have to do that even when it’s not easy, because it is important and is part of what separates good from great. The willingness to say something that puts you in a vulnerable position and opens you up to those around you takes a bigger commitment than most people are willing to make. People are willing to say the easy things but not the hard things that require them to push against the structure around them. It’s like The Emperor’s New Clothes: everyone sees the same thing, but the incentive structure is put together so that nobody acknowledges the problem. This book is all about how to structure your own thoughts so you can push against the structure around you.

There are portions of the book written in an abstract sense and others that are much more specific. The specific sections contain several manifestos describing how leaders or parents should behave around those they are responsible for. The idea is that you may not be perfect, but you strive to be better and hope that everyone else engages with you in trying to be better. Each manifesto describes how the person in that situation can open up to those subordinate to them and truly embrace the position they are in.

I’m not sure if the book is that profound, or if it just found me at a point in my life when I was open to what it was saying. Either way, I felt moved by it. It made me feel that I should push outwards and express my opinions more. I had been expressing myself in some domains, but it is hard to put yourself out there in all areas all of the time. Sometimes you just want to wait and let things happen, but sometimes you need to make things happen. Not in the doing sense but in the living sense, where you can’t just wait for something to happen but need to do something to move the situation forward.

Mongo Play Evolutions

I ran into an odd situation with some Play Framework evolutions for MongoDB and hope to save the next person in this situation some time. I got two messages from it that I wasn’t really expecting. The first was “!!! WARNING! This script contains DOWNS evolutions that are likely destructives” and the second was seemingly more helpful: “mongev: Run with -Dmongodb.evolution.applyProdEvolutions=true and -Dmongodb.evolution.applyDownEvolutions=true if you want to run them automatically, including downs (be careful, especially if your down evolutions drop existing data)”. The big issue was that I couldn’t tell why it thought it should be running the downs portion of the evolution at all.

Some digging in the logs showed it wanted to run the down for evolution 71 and the up for evolution 71 as well. This was when I got really confused: why would it attempt to run both the down and the up of the same evolution? I spent a while digging through the code looking at how it decided what evolutions to run, and it turns out to compare the content of each evolution as it was saved when it originally ran against the current content of that evolution on disk. It recognized that the current evolution 71 was different from the one that had already been applied, and it was attempting to fix that by running the saved down evolution and then the new up evolution.
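
To make that behavior concrete, here is a rough, hypothetical sketch of the decision logic as I understood it from reading through the plugin. The names, types, and sample data are invented for illustration; this is not the actual mongev source.

```scala
// A hypothetical sketch of the evolution-diffing behavior described above.
object EvolutionDiffSketch {
  final case class Evolution(revision: Int, up: String, down: String)

  // Content recorded in the database when each evolution originally ran
  // (here: the version that shipped in the accidental snapshot deploy).
  val applied: Map[Int, Evolution] = Map(
    71 -> Evolution(71, up = "db.users.update(/* old */)", down = "db.users.update(/* undo old */)")
  )

  // Content currently on disk in the project.
  val current: Map[Int, Evolution] = Map(
    71 -> Evolution(71, up = "db.users.update(/* new */)", down = "db.users.update(/* undo new */)")
  )

  // For any revision whose stored content differs from what is on disk, the
  // plugin schedules the *saved* down followed by the *new* up, which is why
  // revision 71 showed up in both the downs and the ups to be run.
  def scriptsToRun: Seq[String] =
    current.toSeq.sortBy(_._1).flatMap { case (rev, cur) =>
      applied.get(rev) match {
        case Some(old) if old.up != cur.up => Seq(old.down, cur.up) // changed evolution
        case Some(_)                       => Seq.empty             // unchanged, nothing to do
        case None                          => Seq(cur.up)           // brand new revision
      }
    }

  def main(args: Array[String]): Unit =
    scriptsToRun.foreach(println) // prints the saved down, then the new up
}
```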

The environment was set up to not run down evolutions, since needing one usually meant that you had screwed up somewhere. We had accidentally deployed a snapshot version to the environment a while back, which is where the unexpected behavior came from: the evolution that snapshot had applied no longer matched what was in the codebase. We ended up fixing the problem by breaking the evolution into two separate evolutions so that there was no down to be run.

Book Chat: The Mikado Method

The Mikado Method describes a way to discover how to accomplish a particular refactoring. The method itself asks that you first attempt to do what you want “naively” and identify the problems with that approach. Then you roll the codebase back to its original state, tackle one of those problems, and iterate on the process until you can resolve the problems in a bottom-up fashion, resulting in multiple small refactorings rather than one big one. This strategy means you can merge with or push to the master branch more frequently, since the codebase is regularly in a working state. It avoids the rabbit hole of making changes on top of changes and never being sure how close you are to having compiling software with a passing test suite again.
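
To show the shape of that loop, here is a toy, self-contained sketch of the process. The goals, their prerequisites, and the function names are all made up for illustration and are not taken from the book.

```scala
// A toy simulation of the Mikado loop: try the change, note what blocks it,
// revert, solve the prerequisites bottom-up, then retry the original goal.
object MikadoSketch {
  // Pretend dependency structure: the naive change fails until its
  // prerequisites have been completed first.
  val prerequisites: Map[String, List[String]] = Map(
    "Swap OrderDao to the new persistence API" ->
      List("Extract an OrderDao interface", "Move cache wiring out of OrderDao"),
    "Extract an OrderDao interface"            -> Nil,
    "Move cache wiring out of OrderDao"        -> Nil
  )

  var completed: Set[String] = Set.empty

  // "Try the change naively": it only works if every prerequisite is done.
  def tryChange(goal: String): Either[List[String], Unit] = {
    val missing = prerequisites.getOrElse(goal, Nil).filterNot(completed)
    if (missing.isEmpty) Right(()) else Left(missing)
  }

  def mikado(goal: String): Unit =
    tryChange(goal) match {
      case Right(()) =>
        completed += goal
        println(s"commit & push: $goal")              // tree is green, integrate now
      case Left(missing) =>
        println(s"revert: '$goal' blocked by $missing") // throw away the experiment
        missing.foreach(mikado)                        // solve the leaves first
        mikado(goal)                                   // then retry the original goal
    }

  def main(args: Array[String]): Unit =
    mikado("Swap OrderDao to the new persistence API")
}
```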

The actual description of the technique and examples is only about 60 pages of the roughly 200 pages of the book; most of the rest is other tips and tricks for working on refactoring. There is also a rather long appendix on technical debt that I found expressed some ideas I had been thinking about recently; it describes four techniques for tackling the sources of technical debt.

The four techniques listed are absolve, resolve, solve, and dissolve. Absolve is essentially normalizing the practice and saying it is okay to do things this way; this would be something like lowering the automated test coverage required during a hard scheduled push. Resolve is reverting a change in the current processes and environments that had unintended negative effects; this would be something like getting rid of an internal bug bounty if it was being abused. Solve is changing the incentive schemes in order to bring groups into alignment, for instance having development teams on call in order to align their incentives with the operations teams. Dissolve is the sort of radical solution that completely removes the friction between groups and makes the problem disappear entirely; to continue the previous example, this would be a devops culture where operations and developers are all on the same team and there is less distinction between the two. Each of these techniques could be applied to the various sources of technical debt, or even to other sorts of problems.

The actual Mikado technique doesn’t seem book-worthy in the sense that it isn’t complex enough to warrant an entire book on its own. The other refactoring techniques weren’t anything particularly novel to people who are already familiar with Refactoring Legacy Code or similar material. Overall it was a quick, enjoyable read, but not the sort of thing I would strongly recommend to others.

Theories of Technical Debt

There are a few major causes of technical debt, even on the best-run projects.

  1. Schedule pressure
  2. Changed requirements
  3. Lack of understanding of the domain

You can choose to take on debt strategically to accommodate an aggressive schedule, you can accumulate debt from having requirements change, or you can collect debt from doing what appeared to be the right thing but turned out not to be once you learned more about the underlying situation. There are plenty of other ways technical debt can be acquired, but most of those can be avoided. Changing requirements can’t really be avoided; things change, that’s the nature of life. Understanding of the domain is a trickier issue, since you can spend more time upfront to better understand the domain, but you will still uncover new aspects as you go.

Schedule pressure is the most demanding of the three. You can consciously decide to take on technical debt by doing something the expedient way. You can also have implicit schedule pressure that pervades the decision-making process. This sort of pervasive pressure causes people to value different things. If leadership discusses the schedule day in, day out, but doesn’t mention quality, quality ends up lacking.

Technical debt is fundamentally a lack of quality; not in the defect sense but in the lack-of-craftsmanship sense. All of those 5,000-line classes got written by engineers who were doing the best they could within the constraints of the environment at the time. Some engineers look at that code and don’t want to touch it, afraid of what it is and how hard it is to change. Other engineers look at it and see a mountain to be climbed, or a wilderness to be civilized; the problem code is something to be taken and bent to your will. Each kind of engineer has a place in the development lifecycle.

If a company needs to hit a product window and is full only of engineers who see debt as a challenge to be dealt with, it might not be able to make that tradeoff. If you only have engineers who are concerned with maximum velocity but leave chaos in their wake, you will have trouble as the codebase matures. Every engineer seems to have a range on the spectrum where they are most comfortable, and where in that range they land day to day seems to be controlled by the environment around them.

If you are surrounded by one side or the other, you might lean towards that side of your range. Another important factor is the balance between the messages management sends about the quality of the software and the messages they send about the schedule of the project. If they keep hammering home getting more done, people will find corners they think can be cut. Which corners those are will differ between people, but they will take on debt in various parts of the codebase. This sort of debt is really insidious because it was never consciously decided on. If you decide to defer good error messages, or to avoid building out an abstraction for now, and you do it explicitly because the schedule makes it the right business choice, then the team discussed it and decided together; they know it isn’t the best technical solution, but it is the best business decision. If someone just does it on their own, everyone else may not even be aware there were other options.

Quality Software

What makes a piece of quality software? Low current defect count is an obvious part of quality software. An intuitive and powerful interface is another obvious part; the interface should be easy for a beginner to start with, yet rich and expressive for an experienced user. These are both important to quality software from a product perspective, but quality on the engineering side is more complex.

On the engineering side, the most important part of quality software is that those who need to make changes can do so confidently. This is a multifaceted goal, with lots of components.

[Figure: quality-software]

You can accomplish each of these subgoals through multiple paths of action. You can get a well-tested application by doing TDD, ATDD, BDD, ad-hoc unit testing, or, with enough time, plain old manual testing. Organizations pick specific tools or practices to accomplish a particular intention and then enshrine that practice as the right way to do things. The techniques give you practices, but it is hard to say whether you are achieving what they are designed to do without quantifying the result somehow.

Most of these aspects are difficult to quantify if you’re trying to measure “quality.” Code coverage, while a quantifiable metric, isn’t the important part; if you cover 10,000 lines of models but skip 150 lines of serious business logic, the metric will say you’re doing great even though you missed the parts that matter. The important part of code coverage is knowing that you covered the important parts of the codebase. There is some interesting research going on into quantifying the readability of source code, but it acknowledges that the humans used to score the samples did not regularly agree on what was readable. You can use cyclomatic complexity as a proxy for modularity, which can tell you that you successfully broke the problem into smaller pieces, but it doesn’t tell you anything about your ability to replace one piece without impacting the others. Easy-to-build and easy-to-start-with can be quantified to a certain degree: if you give a new developer the codebase and related documentation, how long does it take them to get it running on their local machine? The problem with that metric is that you don’t often bring new developers onto a project, and new developers aren’t all created equal. I don’t think I’ve ever seen anyone actively collect that metric in minutes and hours.
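
As a contrived illustration of that coverage blind spot, here is a small sketch. The pricing rule, the numbers, and the names are all invented; the point is only that a line-coverage percentage can look healthy while the branch that actually matters goes untested.

```scala
// Invented example: most of the easily covered code is "model" plumbing,
// while the business rule that matters is a single branch.
object CoverageBlindSpot {
  final case class LineItem(sku: String, quantity: Int, unitPrice: BigDecimal)

  def total(items: Seq[LineItem]): BigDecimal = {
    val gross = items.map(i => i.unitPrice * i.quantity).sum
    if (items.map(_.quantity).sum >= 100) gross * BigDecimal("0.9") // bulk-discount rule
    else gross
  }

  def main(args: Array[String]): Unit = {
    // A suite that only exercises the small-order path reports most lines as
    // covered while never touching the discount branch.
    assert(total(Seq(LineItem("widget", 2, BigDecimal("5.00")))) == BigDecimal("10.00"))
    // The untested case that the coverage number won't flag:
    // assert(total(Seq(LineItem("widget", 100, BigDecimal("5.00")))) == BigDecimal("450.00"))
    println("small-order test passed")
  }
}
```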

The more time I spend in this industry, the more it seems most activity is designed to ensure that the system is functionally correct, since that is the first level of success. The second level, creating truly quality software, seems to be underappreciated because it is hard to quantify, and unless you spend your days with the codebase you can’t appreciate the difference. I think this difference is what can make some tech companies unstoppable juggernauts in the face of competition and everything else. I suspect that a lot of the companies putting out new and innovative ideas on the edge of building higher-quality systems have found ways to quantify the presence of quality, not just the lack of quality signified by downtime and defect counts.