Organizing Code

I’ve been arguing with myself about the proper way to split up some code that has related concerns. The code in question relates to fetching secrets and doing encryption. The domains are clearly related, but the libraries aren’t necessarily coupled. The encryption library needs secrets, but secrets are simple enough to pass across in an unstructured fashion.

As I mentioned before, we are integrating Vault into our stack. We are planning on using Vault to store secrets, and we are also going to be using its Transit secrets engine to do envelope encryption. The work to set up the envelope encryption requires a real relationship between the encryption code and Vault.
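
To make the envelope encryption idea concrete, here’s a conceptual sketch in Scala. The TransitClient trait and its methods are hypothetical stand-ins, not the actual Vault API; only the local AES-GCM piece uses real JDK classes. The key point is that the remote service only ever handles a small data key, while the bulk data is encrypted locally.

import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.spec.{GCMParameterSpec, SecretKeySpec}

// Hypothetical wrapper types; not the real Vault client API.
final case class DataKey(plaintext: Array[Byte], ciphertext: Array[Byte])

trait TransitClient {
  def generateDataKey(keyName: String): DataKey // remote call to the transit engine
  def decryptDataKey(keyName: String, ciphertext: Array[Byte]): Array[Byte]
}

object Envelope {
  // Store the encrypted document and the wrapped key together; the plaintext
  // key is discarded as soon as the local encryption finishes.
  def encryptDocument(transit: TransitClient, keyName: String, doc: Array[Byte]): (Array[Byte], Array[Byte]) = {
    val key = transit.generateDataKey(keyName)
    (encryptLocally(key.plaintext, doc), key.ciphertext)
  }

  // Local AES-GCM encryption with the plaintext data key; the random IV is
  // prepended so the decrypting side can recover it.
  private def encryptLocally(key: Array[Byte], data: Array[Byte]): Array[Byte] = {
    val iv = new Array[Byte](12)
    new SecureRandom().nextBytes(iv)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, iv))
    iv ++ cipher.doFinal(data)
  }
}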

There are a couple of options for how to structure all of this. There are also questions of binary compatibility with the existing artifacts, but that’s bigger than this post. The obvious components are: configuring and authenticating the connection to Vault, the code to fetch and manage secrets, the API for consuming secrets, and the code to do encryption. I’m going to end up with three or four binaries: encryption, secrets, the secret API, and maybe a separate Vault client.

[Figure: organizingCode]

That would be the obvious solution, but the question of what the Vault client exposes is complex, given that the APIs used by the encryption and secrets libraries are very different. It could expose a fairly general API that essentially just makes REST calls and leaves parsing the responses to the two libraries, which isn’t ideal. Alternatively, the Vault client could be a toolkit for building a client instead of a full client. That would allow the security concerns to be encapsulated in the toolkit while letting each library build its own query components.

Since the authentication portion of the toolkit would get exposed through the public APIs of the encryption and secret libraries, that feels like a messy API to me, and I’d like to do better. It seems like there should be a design where the authentication concerns are entirely wrapped up in the client toolkit. I could use configuration options to avoid exposing any actual types, but that just hides the problem behind a bunch of strings and makes the options less self-documenting.
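
As a rough sketch of what I mean (all names below are hypothetical, not a real Vault client API): the toolkit owns authentication and exposes a small transport primitive, and each library layers its own queries on top.

final case class VaultConfig(address: String, token: String)

final class VaultToolkit(config: VaultConfig) {
  // Authentication lives here; callers never see tokens or auth types.
  private def authHeaders: Map[String, String] =
    Map("X-Vault-Token" -> config.token)

  // A deliberately small transport primitive; response parsing stays in the
  // consuming libraries. Stubbed here for the sketch.
  def get(path: String): Either[String, String] = {
    val _ = authHeaders // a real implementation would attach these to the request
    Right(s"""{"path": "$path"}""")
  }
}

// The secrets library builds its own queries on the toolkit...
final class SecretsClient(toolkit: VaultToolkit) {
  def secret(name: String): Either[String, String] =
    toolkit.get(s"v1/secret/data/$name")
}

// ...and the encryption library does the same with its own endpoints.
final class EncryptionClient(toolkit: VaultToolkit) {
  def dataKey(keyName: String): Either[String, String] =
    toolkit.get(s"v1/transit/datakey/plaintext/$keyName")
}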

Like most design questions, there isn’t one right answer; there are multiple concerns at odds with each other. In this case it’s code duplication versus encapsulation versus discoverable APIs. Code duplication and encapsulation are going to win out over discoverability here, since the configuration should be set once and then rarely change, whereas the other two concerns contain the long-term maintenance costs of a library that will likely be used for a good while to come.

Book Chat: Extreme Programming Explained

Extreme programming (XP) is an agile software development methodology. It’s a competitor to scrum, but more focused on the developer experience, less prescriptive of specific organizational practices, and more prescriptive of technical practices. I was familiar with the concepts of XP and recently picked up the second edition of Extreme Programming Explained. The new edition refines some of the technical practices around deployment, since tools now exist for even more rapid deployment than what was initially conceived.

The build time practice is interesting: the idea is that a continuous integration build/test cycle should take ten minutes. While you could make the build faster than that, keeping it around ten minutes provides a decent mental break, enough to get a cup of coffee or get up and stretch. If it’s much slower, there’s a tendency to move on to a different task, and you lose context on both the old task and the new one. This matches my experience; although I hadn’t been able to articulate the solution, I had seen the problem.

The overall methodology seems solid; however, it doesn’t market itself to the whole business the way scrum does, which seems to have limited its adoption. The suggested practices are all pretty straightforward:

  • colocate the team,
  • construct a team with all the necessary skills,
  • have visible progress indicators,
  • work when you can really concentrate on it,
  • pair program,
  • user stories,
  • a weekly cycle,
  • a larger quarterly cycle,
  • slack,
  • the above build time practice,
  • continuous integration,
  • test first programming, and
  • incremental design.

Most modern software teams would be in favor of most, if not all, of these practices. Some of the practices are outside the control of the team and would need significant management support, but most are things the team can control.

I don’t think the differences between this and other agile project management methodologies are that significant. The biggest difference from scrum that I can see is that scrum has fixed reflection periods, whereas XP has continuous reflection with impromptu kaizen events. That difference would let you differentiate an XP adoption from all of the scrum implementations out there that were started but never finished. I don’t think the book adds much to my understanding of software engineering; however, it’s an excellent selection of software engineering practices. If you’re looking for a different perspective on agile methodologies, this would be an interesting read.

BadSSL.com

I ran across badssl.com recently and needed to share. The basic idea of the site is that it hosts a number of subdomains with all sorts of variants of SSL certificates. The example certificates cover the whole range of things that can go wrong with a certificate, including expiration, self-signed certificates, revoked certificates, and certificates for the wrong host. It also exercises the strength of the cryptography in use, with endpoints covering multiple kinds of encryption to test against. This is all so you can see that your browser is securing you properly.

There is a more interesting use case, however. The associated GitHub repo has instructions for booting up the site locally inside a Docker container, so you can run your code against it as part of an automated test suite and exercise all sorts of networking code outside of a browser. The container hosts a separate copy of the site, which keeps your integration tests from reaching out to the public internet for resources. Having integration tests depend on public resources isn’t a good practice for a number of reasons: the round-trip time, the dependency on someone else’s infrastructure for your processes, and plain consideration for someone else’s resources. The container also saves you all of the work of defining what certificates are needed, generating them, and installing them.
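
As a flavor of what such a test can look like, here’s a minimal sketch using only the JDK’s HttpsURLConnection against real badssl.com subdomains; in an actual suite you would point the hostnames at your local container instead.

import java.io.IOException
import java.net.URL
import javax.net.ssl.HttpsURLConnection

object BadSslCheck {
  def handshakeSucceeds(host: String): Boolean =
    try {
      val conn = new URL(s"https://$host/").openConnection().asInstanceOf[HttpsURLConnection]
      conn.connect() // performs the TLS handshake and hostname verification
      conn.disconnect()
      true
    } catch {
      // Certificate and hostname failures surface as SSLException or other
      // IOException subtypes depending on the JDK, so catch broadly here.
      case _: IOException => false
    }

  def main(args: Array[String]): Unit = {
    assert(handshakeSucceeds("badssl.com"))
    assert(!handshakeSucceeds("expired.badssl.com"))
    assert(!handshakeSucceeds("self-signed.badssl.com"))
    assert(!handshakeSucceeds("wrong.host.badssl.com"))
  }
}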

The test case we used the certificates for didn’t turn up any bugs, but it did make us confident in the implementation. This confidence helped us move along more quickly and be sure we were appropriately securing the connections.

Book Chat: Working Effectively With Unit Tests

Working Effectively With Unit Tests is a discussion not of when to unit test or how to unit test, but of how to know when you’ve done it well. It works backwards from the idea that tests should be Descriptive And Meaningful Phrases (DAMP) as opposed to following the traditional software mnemonic Don’t Repeat Yourself (DRY). By allowing some duplication in tests and focusing on the clear intention of what is to be accomplished, you get tests that are easier to read and more focused on the object under test rather than its collaborators.
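
As a small illustration of the idea (my own toy example, not the book’s), the two tests below repeat their setup rather than share a fixture, so each one reads as a complete phrase about the behavior it checks.

object PricingTests {
  final case class Order(unitPrice: Double, quantity: Int)

  def priceOf(o: Order): Double =
    if (o.quantity >= 10) o.unitPrice * o.quantity * 0.9
    else o.unitPrice * o.quantity

  // DAMP: each test restates its own inputs and expected outcome, accepting a
  // little duplication so the intent reads top to bottom.
  def bulkOrdersGetATenPercentDiscount(): Unit =
    assert(priceOf(Order(unitPrice = 10.0, quantity = 10)) == 90.0)

  def smallOrdersPayFullPrice(): Unit =
    assert(priceOf(Order(unitPrice = 10.0, quantity = 1)) == 10.0)
}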

The style being described forces out a lot of the elaborate mock setups common in most first attempts at unit testing. The intention is definitely good; however, like most resources, I feel it falls short of describing how to actually get rid of these problems in real applications, as opposed to the toy applications in books and articles. Still, the ideas it provides work toward those ends admirably. To me, the ideas presented seem to drive towards a more functional style of programming: methods gained more arguments, which made them more flexible, and the objects they lived on were less prone to carrying around extraneous state. The book didn’t discuss this in functional programming terms, but it sort of implied that goal around the edges.

Compared to some of the other books on unit testing I’ve read, this felt more concise, and it was definitely less focused on a specific testing framework. It feels written for someone who has been doing unit testing for a while and either has not been getting value from the activity or has been having maintainability problems with the tests. For those audiences it seems like a good perspective for working their way out of those problems. For people new to unit testing, it may be a little too broad in what you should do and not prescriptive enough.

Book Chat: Perspectives on Data Science for Software Engineering

Perspectives on Data Science for Software Engineering is a collection of short research papers on using the tools of data science to do research into software engineering. It isn’t about the concepts of data science for software engineers, as I thought it would be when I initially picked it up. That difference had me put it down the first time, but when I came back around to it I found myself interested not in the data science aspect but in the software engineering research aspect.

While none of the individual papers was something I read and immediately knew how to apply in my own practice, the overall package left me feeling positive about progress in software engineering. Outside of language design, it sometimes feels like most of the software engineering learning going as far back as the 1970s and 80s hasn’t been applied in practice. I think part of the disconnect is that the research is removed from the way software is built in the wild. The research is either hyper-specific (e.g., focusing on a particular kind of software in a single language) or defines problems but not solutions (e.g., the work on code quality metrics). The research isn’t wrong, but it’s missing a step about how to apply the work to what you’re doing.

The only piece that felt immediately connected to what I was doing was the one on bug clustering. It showed that the more bugs a file had, the more likely it was to have more bugs in future iterations. This seems to lend some credence to the idea of rewriting a piece of code that has quality problems, effectively blanking the slate and starting over.

Overall the book was intellectually stimulating, but it has no real practical use for what I do or, I suspect, for the average software developer. If your role straddles the practical and academic worlds, it may have more value to you.

Book Chat: The Architecture of Open Source Applications Volume 2

The Architecture of Open Source Applications Volume 2 has writeups describing the internal structure and evolution of nearly two dozen different open source projects, ranging from tools to web servers to web services. This is different from volume one, which didn’t include any web service-like software, the kind of thing I build day to day. It is interesting to see the differences between what I’m doing and how something like MediaWiki powers Wikipedia.

Since each section has a different author, the book doesn’t have a consistent feel or even a consistent organization across the sections. It does, however, give each section space to discuss the project’s past and explain how it evolved to the current situation. Looked at as a finished product, some choices don’t make sense, but the room to explore the history shows that each individual choice was a reasonable response to the challenges of the time. The history of MediaWiki is very important to its current architecture, whereas something like SQLAlchemy (a Python ORM) has evolved more around how it adds new modules to support different databases and their specific idiosyncrasies.

I found the lessons learned sections provided with some of the projects to be the best part of the book. They describe the experience of working with codebases over the truly long term. Most codebases I work on are a couple of years old, while most of these were over ten years old as of the writing of the book and are more than fifteen years old now. Seeing an application evolve over longer time periods can truly help validate architectural decisions.

Overall I found it an interesting read, but it treads a fine line between giving you enough context on an application to understand its architecture and giving you so much context that the majority of a section is about the “what” of the application. I felt a lot of the chapters dealt too much with the “what”. Some of the systems are also niche enough that it’s unclear how the architectural choices would apply to designing other things in the future, because nobody would start a new application in that style. If you have an interest in any of the applications listed, check out the site and read the section there, and buy a copy to support their endeavours if you find it interesting.

Book Chat: Scala With Cats

Scala with Cats is a free ebook put together to help introduce the Cats library and its programming style to developers. It is targeted at Scala developers with about a year of experience with the language, though if you have been using the language in a very Java-like way you may not be prepared for this book. That caveat aside, it is an accessible introduction to the library and its style of programming.

I can’t go back and read this before having read Functional Programming in Scala, but it seems like either order would work fine. They both talk about the same basic concepts of purely functional programming, coming at it from two different perspectives: Scala with Cats shows how the category theory-inspired structures in the library can be used to solve problems, whereas Functional Programming in Scala leads you towards those same structures by getting you to find the patterns yourself.

I really appreciated the last set of exercises in Scala with Cats, which has you implement this progression yourself. It starts out as a fully concrete class, which you then convert into more and more generic structures. First, you add some type classes to become generic over the specific types. Then, you abstract over the intermediate data structure and convert it into its own type class. Finally, you abstract over the data structure even further by replacing it with another type class.
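
A condensed sketch in the spirit of that progression (my own toy example, not the book’s code): start with a concrete merge over maps of integers, make it generic over the value type with a type class, then give the backing structure a type class of its own.

object ProgressionSketch {
  // Concrete starting point: merge two counters by summing values per key.
  def mergeConcrete(a: Map[String, Int], b: Map[String, Int]): Map[String, Int] =
    b.foldLeft(a) { case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0) + v) }

  // Step 1: a type class for the values makes the merge generic over A.
  trait Combine[A] { def combine(x: A, y: A): A }
  implicit val intCombine: Combine[Int] = (x, y) => x + y

  def mergeValues[A](a: Map[String, A], b: Map[String, A])(implicit c: Combine[A]): Map[String, A] =
    b.foldLeft(a) { case (acc, (k, v)) =>
      acc.updated(k, acc.get(k).fold(v)(c.combine(_, v)))
    }

  // Step 2: abstract over the backing structure itself with another type class.
  trait KeyValueStore[F[_, _]] {
    def get[K, V](f: F[K, V], k: K): Option[V]
    def put[K, V](f: F[K, V], k: K, v: V): F[K, V]
    def entries[K, V](f: F[K, V]): List[(K, V)]
  }

  def merge[F[_, _], K, V](a: F[K, V], b: F[K, V])(implicit kvs: KeyValueStore[F], c: Combine[V]): F[K, V] =
    kvs.entries(b).foldLeft(a) { case (acc, (k, v)) =>
      kvs.put(acc, k, kvs.get(acc, k).fold(v)(c.combine(_, v)))
    }
}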

I think this style of programming has some definite pros. The idea behind the vocabulary is good, even if the terms chosen obscure some of the intent. The extensive use of type classes adds an additional layer of polymorphism that lets a library author abstract over portions of the implementation to future-proof it. The Scala implementation of type classes makes this feel awkward at points, since the imports for implicit instances make it less obvious what is happening. I feel like I need to spend some time with a real application written in this style to see what the negatives are in working with it. I can see the issues with learning to work in this style, but I’m uncertain what the negatives are once you’ve gotten used to it.
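
For example, with the Cats 1.x/2.x style of imports, nothing at the call site says where the Monoid[Int] instance or the |+| operator comes from; both arrive silently via implicit scope.

import cats.Monoid
import cats.instances.int._    // brings the implicit Monoid[Int] into scope
import cats.syntax.semigroup._ // brings the |+| combine operator into scope

object ImplicitImports {
  val sum: Int = 1 |+| 2 |+| Monoid[Int].empty // sum == 3
}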

Type Aliases in Scala

I had an interesting conversation recently with one of the junior engineers on my team about when to use type aliases. Normally when I’m asked for advice I’ve thought about the topic, or at least have a rule of thumb I use for myself. Here, all I could manage to express was that I don’t use type aliases, but not for any particular reason. I felt I should do better than that and promised to put together some better advice and see what we could do with it.

Having thought it through a little, here’s the guidance I gave. You can use type aliases as part of type refinement, to constrain an existing type; for example, constraining an integer to only positive integers. A bare alias doesn’t enforce the constraint by itself, but paired with a validating constructor it lets you push the check to the edge of your logic, instead of assuming that some arbitrary integer is positive or checking it in multiple places. This gives you better assurance that your logic is correct and that error conditions have been handled.
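
A minimal sketch of that refinement idea, assuming a smart-constructor convention (the alias itself is just documentation; the constructor at the edge is what enforces the check):

object Positive {
  type PositiveInt = Int

  // The only sanctioned way to produce a PositiveInt: validate at the edge.
  def fromInt(i: Int): Option[PositiveInt] =
    if (i > 0) Some(i) else None
}

object PositiveUsage {
  // Validate once at the boundary...
  val retries: Option[Positive.PositiveInt] = Positive.fromInt(3)

  // ...and downstream code takes PositiveInt without re-checking, though note
  // a bare alias won't stop a caller passing a plain Int here.
  def retry(times: Positive.PositiveInt): Unit =
    (1 to times).foreach(_ => println("attempt"))
}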

They can also be used to attach a name to a complex type. So instead of having an

Either[List[Error], Validated[BusinessObject]]

being repeated through the codebase, you can name it something more meaningful to the use case. This also allows hiding some of the complexity of a given type. So if, for example, you had a function type that is itself multiply nested, like

String => (Int, Boolean) => Foo[T] => Boolean

a type alias can wrap all of that up into a meaningful name.
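
Putting both uses together, with hypothetical stand-ins for the types above:

object AliasExamples {
  // Hypothetical stand-ins for the domain types in the examples above.
  final case class Error(message: String)
  final case class Validated[A](value: A)
  final case class BusinessObject(id: Long)
  final case class Foo[T](value: T)

  // One name for the sprawling Either...
  type ValidationResult = Either[List[Error], Validated[BusinessObject]]

  // ...and one for the nested function type.
  type FooCheck[T] = String => (Int, Boolean) => Foo[T] => Boolean

  def validate(raw: String): ValidationResult =
    if (raw.nonEmpty) Right(Validated(BusinessObject(1L)))
    else Left(List(Error("empty input")))
}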

None of this is really a good rule for a beginner, but it feels like it wraps up the two major use cases I was able to find. I ended up going back to the engineer who prompted the question with “use type aliases when they make things clearer and are used consistently.” Neither of us was really happy with that idea. There are clearly more use cases that make sense, but we weren’t able to articulate them. We’re both going to try it in some code and come back around to the discussion later to see where that gets us.

Scala Varargs and Partial Functions

I ran into a piece of code recently that looked like

foo(bar,
  { case item: AType => … },
  { case item: AnotherType => … },
  { case item: YetAnotherType => … }
  // 10 more cases removed for simplicity
)

I was immediately intrigued, because that was a very odd construction, and I was confused about why someone would write a function accepting this many different partial functions and what they were up to. I went to look at the signature and found the below.

def foo(bar: ADTRoot, conditions: PartialFunction[ADTRoot, Map[String, Any]]*): Map[String, Any]

It was using the partial functions to pick items from the algebraic data type (ADT) and merge them into the map. More interestingly, it used the ability of a partial function to report whether it can operate on the type that bar happened to be. Overall it was an interesting combination of language features to create a unique solution.
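
Here’s a self-contained sketch of the pattern (the ADT and its fields are made up here): each partial function reports via isDefinedAt whether it applies to bar, and the results of the applicable ones are merged into a single map.

object PartialMerge {
  sealed trait ADTRoot
  final case class AType(name: String) extends ADTRoot
  final case class AnotherType(count: Int) extends ADTRoot

  def foo(bar: ADTRoot, conditions: PartialFunction[ADTRoot, Map[String, Any]]*): Map[String, Any] =
    conditions
      .filter(_.isDefinedAt(bar))               // keep only the cases that match bar
      .map(_.apply(bar))                        // run each applicable extractor
      .foldLeft(Map.empty[String, Any])(_ ++ _) // merge the partial maps

  val merged: Map[String, Any] = foo(
    AType("widget"),
    { case AType(n)       => Map("name" -> n) },
    { case AnotherType(c) => Map("count" -> c) }
  )
  // merged == Map("name" -> "widget")
}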

Part of it is that the ADT was missing some abstractions that should have been there to make this sort of work easier, but even then we would have had three cases, not a dozen. I’m not sure if this pattern is a generalizable solution, or even desirable if it is, but it got me thinking about creative ways to combine the language features Scala provides.

Book Chat: Functional Programming in Scala

I had been meaning to get a copy of this for a while; then I saw one of the authors, Rúnar Bjarnason, at NEScala 2017 giving a talk on adjunctions. Before seeing this talk I had been trying to wrap my head around a lot of the category theory underpinning functional programming, and I thought I had been making progress. Seeing the talk made me recognize two facts. First, there was a long way for me to go. Second, there were a lot of other people who also only sort of got it and were all there working at understanding the material. At the associated unconference he gave a second talk which was much more accessible than the linked one. Sadly there is no recording, but I started to really feel like I got it. The other attendees I talked with at the conference all spoke of Functional Programming in Scala in an awed tone, describing how it helped them really get functional programming and the associated category theory.

The book is accessible to someone with minimal background in this, so I came in somewhat overqualified for the first part but settled in nicely for the remaining three parts. It’s not a textbook, but it does come with a variety of exercises and an associated repo with stubs for the questions and answers to the exercises. There is also a companion pdf with chapter notes and hints about how to approach some of the exercises, which can help you get moving in the right direction if you’re stuck.

Doing all of the exercises while reading the book is time consuming. Sometimes I would read about half a page and then spend more than an hour on the associated exercises. The whole effort was mentally stimulating regardless of the time it took, but it was draining. Some of the exercises have even been converted to a web-based format, more like unit testing, at Scala Exercises.

I made sure I finished the book before going back to NEScala this year. Rúnar was there again and gave more or less the same category theory talk as the year before, but this time around I got most of what was going on in the first half. In fact, I was so pleased with how much of the talk I was successfully following that I missed a key point in the middle. I ended up talking with one of the organizers, who said he encourages Rúnar to give this same talk every year since it is so helpful in giving everyone an understanding of the theoretical underpinnings of why all this works.

This book finally got me to understand the underlying ideas of how all this works as I built up the infrastructure for principled functional programming. It leaned into the complexity and worked through it, whereas other books (like Functional Programming in Java) tried to avoid the complexity and focus on the what, not the why. This was the single best thing I did to learn this material.