Where are all the consultants?

I spend a lot of time engaging with programming-related content, online and in person. Most of it is created by people who describe themselves as “consultants” of some variety, yet until recently I had never worked with any such consultants anywhere. I had wondered: where are all the consultants? Recently at work the floodgates opened, and a huge wave of consultants appeared to help a couple of teams hit their objectives. I’m talking 40-50 consultants against an existing engineering team of ~250.

Watching this from the outside was interesting, since our people seemed to spend a lot of time getting the consultants up to speed on everything that was going on. This was exacerbated because the consultants did not get access to everything a normal employee would, most notably the wiki; that meant that large quantities of the documentation that would normally just be linked to a new employee had to be exported, and therefore couldn’t easily be contributed back to either. There were also timezone issues, since many of the consultants were in eastern Europe, which left them a limited window to interact with anyone on the US east coast and no reasonable time to interact with those on the US west coast. The remote-only contractor presence was interesting given our unwillingness to start full-time employees as remote. Overall, the teams that picked up the consultants eventually worked around the obstacles and got the consultants contributing.

All of this was a matter of idle curiosity about how the rest of the organization was run, until the team I was on was slated to pick up oversight of two new consultants. Fortunately, by the time we got there most of the immediate logistical problems had been solved, and the majority of the basic onboarding documentation had been extracted from the wiki and put into a Google Drive the consultants could see. We also had the advantage of picking up US-based consultants, so time zones weren’t a problem. Overall, both consultants are very sharp and experienced in the kinds of technologies we use. But we have them for three months to start with, so we take on the whole onboarding overhead with only three months to get the return on investment that comes from it.

This raises three questions in my mind. First, when the consultants are done, how much more will we have gotten done than if we had just done the work ourselves? Second, isn’t the whole process just going to repeat itself with the next big set of deliverables for engineering? Third, is the content I see consultants generating really their reaction to other companies that have already gotten themselves into trouble?

The first question seems like it should come out net positive, at least for the consultants my team has, but I think part of that is because the kinks in the system were worked out by the teams that went first. The second question is much more intriguing. The initial need for the consultants seems to have stemmed from a failure to grow engineering organically, so the resources we put into finding and vetting consultants weren’t being put into finding and vetting employees. While we may have gotten more engineering work done in the short term, HR and management resources were spread thinner on long-term recruiting. Even though the consultants were doing great work, it feels like our longer-term ambitions may have been sacrificed to meet present obligations.

The third question is much broader. If the advice being poured out onto the internet and delivered in conference talks is the result of consultants looking at lots of organizations that are already dysfunctional, then it may be biased toward bringing bad organizations up to passable rather than aiming for great. It strikes me as being like trying to form a psychological theory using just a prison population, because that’s who the psychologist happens to treat every day. Since having this thought, I haven’t been able to spot any common architectural or management mantras that were clearly thought up in response to these sorts of situations. Maybe Tolstoy was right after all: happy families are all alike; every unhappy family is unhappy in its own way.


Book Chat: The Architecture of Open Source Applications Volume 2

The Architecture of Open Source Applications Volume 2 has writeups describing the internal structure and evolution of nearly two dozen open source projects, ranging from tools to web servers to web services. This is different from volume one, which didn’t include any web service-like software, the kind of software I build day to day. It is interesting to see the differences between what I’m doing and how something like MediaWiki powers Wikipedia.

Since each section has a different author, the book doesn’t have a consistent feel, or even a consistent organization to the sections on each application. It does, however, give authors space to spend a lot of time discussing a project’s past to explain how it evolved into its current situation. Looked at as a finished product, some choices don’t make sense, but the room to explore the history shows that each individual choice was a reasonable response to the challenges of the time. The history of MediaWiki is very important to its current architecture, whereas something like SQLAlchemy (a Python ORM) has evolved more around how it adds new modules to support different databases and their specific idiosyncrasies.

I found the lessons learned that accompany some of the projects to be the best part of the book. They describe the experience of working with a codebase over the truly long term. Most codebases I work on are a couple of years old, while most of these were over 10 years old as of the writing of the book and are more than 15 years old now. Seeing an application evolve over such long time periods really helps validate architectural decisions.

Overall I found it an interesting read, but it treads a fine line between giving you enough context on an application to understand its architecture and giving you so much context that the majority of the section is about the “what” of the application. I felt a lot of the chapters dealt too much with the “what.” Some of the systems are also very niche, where it’s not clear how the architectural choices would apply to designing other things in the future, because nobody would really start a new application in that style. If you have an interest in any of the applications listed, check out the site and read the section there, and buy a copy to support their endeavours if you find it interesting.

Book Chat: Learn You a Haskell for Great Good

Haskell was the white whale of functional programming in my mind: the definitive form of functional programming, but with such a steep learning curve that it puts off all but the most determined students. I had been recommended Learn You a Haskell for Great Good a while ago but kept putting it off because of the intimidating nature of the material. Eventually I had a big block of time when I was going to be home without many responsibilities, so I figured it was a great opportunity to take a crack at it.

I sat down with it expecting it to be as mentally taxing as Functional Programming in Scala was; however, having already put in the work reading that and Scala with Cats, I was way ahead of the curve. While the Haskell syntax isn’t exactly friendly to beginners, I understood most of the concepts: type classes, monads, monoids, comprehensions, recursion, higher-order functions, etc. My expectation of the language’s difficulty was unfounded. Conceptually it works cleanly; however, coming from a C-style language background, the syntax is off-putting. On top of the basic syntax issues, the heavy use of operators gives the language an aura of inscrutability, especially since operators are difficult to search for. I did find this PDF that names most of them, which helped me look for additional resources about some of them.

The book explained some of the oddities around the stranger pieces of Haskell I had seen before. Specifically, the Monad type class not originally requiring Applicative: it’s a historical quirk, since monads were introduced to the language first and the maintainers didn’t want to break backwards compatibility. The other fact I had not fully appreciated is that Haskell dates from 1990, which excuses a lot of the decisions about things like function names with letters elided for brevity.

The other differentiating fact about the book is that it tries to bring some humor rather than being a strictly dry treatment of the material. The humor gave me a stronger connection with the author and the material. A stupid pun as a section header worked for me, providing a little mental break that helped me keep my overall focus while reading.

Property Based Testing

When I moved into Scala programming I hoped to experience the world of functional programming first hand. The style of Scala everyone uses at work is more functional than anything else I had worked with before, but it isn’t fully functional, since it uses exceptions and an object-oriented architecture. I had been expecting more monads and such; however, there was one idea from functional programming I was exposed to for the first time: property based testing.

Property based testing is a style of writing unit tests where you define rules that must always hold for the function being tested. It pairs well with functional programming, since the immutability of data in functional programming makes writing the rules significantly easier. It is a simple concept, but there is a lot of nuance around constructing the data to be used and the rules. The example rules in this post use the ∀ symbol, which means “for all.”

The rules are interesting since you effectively need to be able to describe a set of rules that covers a function’s behavior and define the output in terms of some other logic. Consider this set of rules for doing string concatenation:

∀ strings A and B: (A+B).startsWith(A)

∀ strings A and B: (A+B).endsWith(B)

∀ strings A and B: (A+B).length == A.length + B.length
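
Expressed as executable checks, those three rules might look like the following ScalaCheck sketch (ScalaCheck comes up again at the end of this post; the object and property names here are my own):

import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

object StringConcatProps extends Properties("String concatenation") {
  // ∀ strings A and B: (A+B).startsWith(A)
  property("startsWith") = forAll { (a: String, b: String) =>
    (a + b).startsWith(a)
  }

  // ∀ strings A and B: (A+B).endsWith(B)
  property("endsWith") = forAll { (a: String, b: String) =>
    (a + b).endsWith(b)
  }

  // ∀ strings A and B: (A+B).length == A.length + B.length
  property("length") = forAll { (a: String, b: String) =>
    (a + b).length == a.length + b.length
  }
}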

This set of rules assumes you have three other working functions: startsWith, endsWith, and length. When you then go to define the rules for those functions, you can’t assume that + works, so you end up with the next set of rules for startsWith.

∀ strings A: for(int i = 0; i < A.length; i++)
    A.startsWith(A.dropRight(i))

∀ strings A, characters B: for(int i = 0; i < A.length; i++)
    if (i == 0 || B != A.charAt(A.length - i))
        Not A.startsWith(A.dropRight(i) + B)

This then leads you to the rules for dropRight:

∀ strings A, ints B such that B < A.length: A.dropRight(B).length + B == A.length

∀ strings A, ints B such that B < A.length: C = A.dropRight(B)
    for(int i = 0; i < C.length; i++)
        A[i] == C[i]

This eventually gets you back to a closed set of rules; however, you effectively rewrote startsWith in the rules for dropRight. You can reorganize the rules, but for most non-trivial systems you end up with a sort of looping logic in the rules defining how everything works. You can break the loop as in the above example, where you essentially reimplement a small portion of the logic in the tests; you can set up some traditional example-based unit tests to cover the area where the property based tests loop back; or you can define multiple sets of rules for the same function to increase your confidence in the solution. For the last option, in this example, we would define an additional rule for +.

∀ strings A and B: (A+B).splitAt(A.length) == (A, B)

This rule tackles the same concept from a different direction. It adds an additional layer of confidence that the system is working as intended.
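
In the same sketch style as before, that rule could be encoded as:

import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

object SplitAtProps extends Properties("String splitAt") {
  // ∀ strings A and B: (A+B).splitAt(A.length) == (A, B)
  property("splitAt inverts +") = forAll { (a: String, b: String) =>
    (a + b).splitAt(a.length) == ((a, b))
  }
}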

The nuance in creating the data comes from the distribution of data within a data type. Generating random integers is pretty good, but some integers are more likely than others to produce interesting data. Numbers like min, max, -1, 0, and 1 are all more likely to trigger a case than numbers picked at random. Similarly, for strings, the empty string and odd symbols are more likely to produce interesting cases. Most property based testing frameworks allow you to define interesting cases for your own more complex data types.
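
For example, here is a sketch of an integer generator biased toward those edge values (the weights are arbitrary choices of mine):

import org.scalacheck.Gen

object InterestingInts {
  // Mostly uniform random ints, but with the edge values weighted
  // heavily enough that they show up in nearly every run.
  val gen: Gen[Int] = Gen.frequency(
    1 -> Gen.const(Int.MinValue),
    1 -> Gen.const(Int.MaxValue),
    1 -> Gen.const(-1),
    1 -> Gen.const(0),
    1 -> Gen.const(1),
    6 -> Gen.choose(Int.MinValue, Int.MaxValue)
  )
}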

I’ve been doing property based testing using the ScalaCheck library, which is derived from Haskell’s QuickCheck. There are QuickCheck-derived testing frameworks available for most major programming languages. If this sort of logic-based testing appeals to your sensibilities, give it a try and see if you can craft properties to augment or even replace your existing test suites.

Book Chat: Effective DevOps

Effective DevOps is about the culture of the DevOps movement. The technical practices that today coincide with DevOps are the result of the cultural practices, not the cause. The cause is an underlying culture that is safe and respectful to those in it, which truly empowers the team to try things to improve the way work is done and leads to the technical practices associated with DevOps. The book is written more from a management perspective than an individual contributor perspective, and it is centered around the four pillars of effective DevOps: Collaboration, Affinity, Tools, and Scaling.

Collaboration covers the normal sort of mentoring and workflow information that would be familiar to most agile or lean practitioners. The Affinity pillar builds on top of Collaboration with the idea that it takes time and work to forge a group of individuals into a team, and it explores the requirements for building those bonds. These two pillars lead into the Scaling pillar nicely since, while you can eliminate waste and automate things, at the end of the day the biggest scaling maneuver is hiring. Hiring renews the importance of Collaboration and Affinity, since as you bring new people into the system you must fully integrate them.

The section on the Tools pillar is written in a tool-agnostic fashion, describing the categories of tools commonly used in DevOps. That makes it much more interesting than a book tied to a particular set of technologies, since it is focused on the concepts, not the implementations.

Overall it’s an interesting read. The focus on the social aspects of what’s going on makes it less useful in my day-to-day activities, but the longer I do this job the more I think that the technical aspects are essentially table stakes and that everything else is where the longer-term growth comes from.

Book Chat: Scala With Cats

Scala with Cats is a free ebook put together to help introduce the Cats library and its programming style to developers. It is targeted at Scala developers with about a year of experience with the language; if you have been using the language in a very Java-like way, you may not be prepared for this book. That caveat aside, it is an accessible introduction to the library and its style of programming.

I can’t go back and read this before having read Functional Programming in Scala, but it seems like either order would work fine. Both books talk about the same basic concepts of purely functional programming, but they come at them from two different perspectives: Scala with Cats shows how the category theory-inspired structures in the library can be used to solve problems, whereas Functional Programming in Scala leads you toward those same category theory-inspired structures by getting you to find the patterns yourself.

I really appreciated the last set of exercises in Scala with Cats, where you implement this progression yourself. It starts with a fully concrete class and converts it into more and more generic structures: first by adding some type classes to become generic over the specific types, then by abstracting over the intermediate data structure and converting the structure into its own type class, and finally by abstracting over the data structure even further by replacing it with another type class.
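
As a rough illustration of the first step of that progression (a toy example of my own, not the book’s code), going from a concrete class to one generic over its value type via a type class:

import cats.Monoid
import cats.syntax.semigroup._ // provides |+|

// 1. Fully concrete: merge per-key Int counts.
final case class Counter(counts: Map[String, Int]) {
  def merge(other: Counter): Counter =
    Counter(other.counts.foldLeft(counts) { case (acc, (k, v)) =>
      acc.updated(k, acc.getOrElse(k, 0) + v)
    })
}

// 2. Generic over the value type: any A with a Monoid will do.
final case class GCounter[A](counts: Map[String, A]) {
  def merge(other: GCounter[A])(implicit m: Monoid[A]): GCounter[A] =
    GCounter(other.counts.foldLeft(counts) { case (acc, (k, v)) =>
      acc.updated(k, acc.getOrElse(k, m.empty) |+| v)
    })
}

The later steps abstract over the Map itself in the same fashion; this sketch just captures the flavor of the exercise.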

I think this style of programming has some definite pros. The idea behind the vocabulary is good, even if the terms chosen obscure some of the intent. The extensive use of type classes adds an additional layer of polymorphism that lets a library author abstract over portions of the implementation to make it future-proof. The Scala implementation of type classes makes this feel awkward at points, since the imports of implicit instances make it less obvious what is happening. I feel like I need to spend some time with a real application written in this style to see the negatives of working with it. I can see the issues with learning to work in this style, but I’m uncertain what the downsides are once you’ve gotten used to it.

Type Aliases in Scala

I had an interesting conversation recently with one of the junior engineers on my team about when to use type aliases. Normally when I get asked for advice I’ve already thought about the topic, or at least have a rule of thumb I use for myself. Here all I could manage to express was that I don’t use type aliases, but not for any particular reason. I felt I should do better than that, and promised to find some better advice and see what we could do with it.

Having thought it through a little, here’s the guidance I gave. You can use type aliases to do a kind of type refinement, constraining an existing type; for example, you could constrain an integer to only positive integers. Instead of assuming that some arbitrary integer is positive, or checking it in multiple places, you can push that check to the edge of your logic. This gives you better assurance that your logic is correct and that error conditions have been handled.
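
A minimal sketch of the positive-integer case (the names are mine): a plain alias isn’t checked by the compiler on its own, so the guarantee comes from funneling construction through a check at the edge.

object Positive {
  type PositiveInt = Int

  // The single entry point where the constraint is enforced.
  def fromInt(i: Int): Option[PositiveInt] =
    if (i > 0) Some(i) else None
}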

They can also be used to attach a name to a complex type. So instead of having an

Either[List[Error], Validated[BusinessObject]]

being repeated through the codebase, you can name it something meaningful to the use case. This also allows hiding some of the complexity of a given type. So if, for example, you had a function that returns a multiply nested function like

String => (Int, Boolean) => Foo[T] => Boolean

it can wrap all that up into a meaningful name.
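
For example (the alias names are my own, and the placeholder types stand in for the ones from the snippets above):

object Aliases {
  type Error = String            // placeholders for the types above
  type Validated[A] = Option[A]
  type BusinessObject = AnyRef
  type Foo[T] = List[T]

  type ValidationResult = Either[List[Error], Validated[BusinessObject]]
  type FooHandler[T] = String => (Int, Boolean) => Foo[T] => Boolean
}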

None of this is really a good rule for a beginner, but it feels like it wraps up the two major use cases I was able to find. I ended up going back to the engineer who prompted the question with “use type aliases when they make things clearer and are used consistently.” Neither of us was really happy with that idea. There are clearly more use cases that make sense, but we weren’t able to articulate them. We’re both going to try it in some code and come back around to the discussion later to see where that gets us.

Scala Varargs and Partial Functions

I ran into a piece of code recently that looked like

foo(bar,
  { case item: AType => … },
  { case item: AnotherType => … },
  { case item: YetAnotherType => … }
  // 10 more cases removed for simplicity
)

I was immediately intrigued, because this is a very odd construction, and I was confused about why someone would write a function accepting this many different partial functions and what they were up to. I went to look at the signature and found the following:

def foo(bar: ADTRoot, conditions: PartialFunction[ADTRoot, Map[String, Any]]*): Map[String, Any]

It was using the partial functions to pick items from the algebraic data type (ADT) and merge them into the map. More interestingly, it used the ability of a partial function to report whether it can operate on the type that bar happened to be. Overall it was an interesting combination of language features to create a unique solution.
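
A hedged sketch of how such a foo could work (the signature is from the code I found; the body is my guess at the mechanism, not the original implementation):

def foo(bar: ADTRoot, conditions: PartialFunction[ADTRoot, Map[String, Any]]*): Map[String, Any] =
  conditions
    .flatMap(_.lift(bar))                     // keep only the cases defined for bar
    .foldLeft(Map.empty[String, Any])(_ ++ _) // merge the selected maps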

Part of the issue is that the ADT was missing some abstractions that should have been there to make this sort of work easier, but even then we would have had three cases, not a dozen. I’m not sure if this pattern is a generalizable solution, or even desirable if it is, but it got me thinking about creative ways to combine the language features Scala provides.

Book Chat: Functional Programming in Scala

I had been meaning to get a copy of this for a while; then I saw one of the authors, Rúnar Bjarnason, at NEScala 2017 giving a talk on adjunctions. Before seeing the talk I had been trying to wrap my head around a lot of the category theory underpinning functional programming, and I thought I had been making progress. Seeing the talk made me recognize two facts. First, I still had a long way to go. Second, there were a lot of other people who also only sort of got it and were all there working at understanding the material. At the associated unconference he gave a second talk, which was much more accessible than the linked one. Sadly there is no recording, but I started to really feel like I got it. Talking with some of the other attendees at the conference, they all spoke of Functional Programming in Scala in an awed tone, describing how it helped them really get functional programming and the associated category theory.

The book is accessible to someone with minimal background in this, so I came in somewhat overqualified for the first part but settled in nicely for the remaining three parts. It’s not a textbook, but it does come with a variety of exercises and an associated repo with stubs for the questions and answers to the exercises. There is also a companion PDF with chapter notes and hints on how to approach some of the exercises, which can help you get moving in the right direction if you’re stuck.

Doing all of the exercises while reading the book is time consuming. Sometimes I would read about half a page and then spend more than an hour on the associated exercises. It was mentally stimulating regardless of the time committed, but it was draining. Some of the exercises have even been converted to a web-based format, more like unit testing, at Scala Exercises.

I made sure I finished the book before going back to NEScala this year. Rúnar was there again and gave more or less the same category theory talk as the year before, but this time around I got most of what was going on in the first half of the talk. In fact, I was so pleased with myself when I realized how much of the talk I was successfully following that I missed a key point in the middle. I ended up talking with one of the organizers, who said he encourages Rúnar to give this same talk every year since it is so helpful in giving everyone an understanding of the theoretical underpinnings of why all this works.

This book finally got me to understand the underlying ideas of how all this works, by having me build the infrastructure for principled functional programming myself. It leans into the complexity and works through it, whereas other books (like Functional Programming in Java) try to avoid the complexity and focus on the what rather than the why. This was the single best thing I did to learn this material.

NEScala 2018

I attended NEScala 2018 recently, for the second time, and wanted to discuss the experience. It’s three loosely affiliated conferences across three days: the first day was an unconference, the second was NEScala proper, and the third was a Typelevel summit. I saw a bunch of great talks and met other practitioners who brought different perspectives to the same sorts of problems I work with every day, as well as some people who have radically different problems.

There was a pair of talks from presenters at Twitter on how they deal with their monorepo using Scalafix and Pants. These were interesting solutions to the problems of a monorepo. During the transition to microservices at my current job the codebase shattered into hundreds of repositories, which comes with its own problems, and you sometimes look back and wonder whether doing it the other way would solve them. This was a clear reminder that there are problems on the other side that are just as difficult – just different.

The sections on Http4s (talk) and sttp were especially interesting for the way they tackled HTTP servers and clients as purely functional structures. HTTP had been, in my mind, difficult to describe purely functionally, because it is all about actions that are not referentially transparent. Sttp also stood out because we had built a similar abstraction at work in the last year, and it was interesting to see where others made different tradeoffs.

The big takeaway for me was that functional programming purity is a pragmatic thing. Functional programming is a tool for tackling complexity in software, but it’s not the only tool available. You can use local effects to wrap small bits of imperative code so that, outside the function where the imperative code lives, none of the callers can tell. You have a thin imperative wrapper on the outside and possibly little imperative nuggets on the inside that resolve performance issues and occasionally improve algorithmic readability, but the whole program retains the composability and readability of an immutable program.
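
A small sketch of what such an imperative nugget can look like: the mutable state is local to the function body, so every caller still sees a pure function.

def sumOfSquares(xs: List[Int]): Long = {
  var acc = 0L                        // local mutation, invisible outside
  for (x <- xs) acc += x.toLong * x   // imperative loop for speed
  acc                                 // callers only ever see the result
}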