Job Openings

Lots of companies have job openings. My team has openings; my friends have openings where they work; and I get constant calls from recruiters trying to entice me away from my job, so plenty of other companies must have openings too. Most of these jobs never seem to actually get filled, since we, as a profession, attrition people out at the same rate we hire them. Maybe it’s the market – the most recent statistics I could find show the DC software market to be one of the hardest in which to fill positions. One of the other interesting facts in that article is that if a job opening isn’t filled within the first month, then half the time it will take more than three months to fill. That raises the question: if you can’t fill a job, why aren’t you figuring out why and changing your approach?

I know at my job we mostly blame the “local job market” for our inability to fill positions in a timely way. I hear from friends who blame “high standards” for their inability to fill their positions. I’m not sure if it’s the market, or standards, or the way candidates are sourced. At any rate, I know I’m contacted by plenty of recruiters whose openings seem to fall into the following categories:

  1. Jobs where the recruiter can’t clearly articulate the responsibilities or duties;
  2. Jobs that are unrelated to my listed skills and technical background;  
  3. Jobs requiring relocation;
  4. Short-term contracts;
  5. Defense/Intelligence work that they can’t discuss the details of; and
  6. Jobs I would actually consider.

Number one, in my experience, is a surprisingly large portion of recruiter contacts – people who are looking for software developers but don’t know what technologies they’re working with or where the job is, yet somehow need to know what my required salary would be. Number two is people looking for someone to work on technology stacks and domains I have no experience with, or even anything similar to what I’ve worked in – which raises the question of why they are contacting me about the job at all. Three and four are pretty straightforward; some people might be interested in these sorts of things, but they are not for me. Number five isn’t something I’m interested in either; I generally want to know more about what I’m getting into than what they can tell me.

I’m often willing to talk to recruiters to learn more, and after the initial chat they usually send me a job description. All too often, it’s a very bland one, like this job description I saw recently. It was basically 10 bullet points (I’m paraphrasing here):

  • BSCS or BSEE or more experience
  • Years of experience required
  • Communication ability
  • Knowledge of SDLC
  • Java or C#
  • Know a second programming language
  • Web development and services
  • SDLC again
  • QA processes
  • Requirements gathering

A couple of things jumped out at me from the description. There were no database technologies mentioned, even though the job logically requires them. The reference to QA processes also worried me, since the way it was phrased didn’t read as something like unit testing, but instead sounded like manual testing and test plans. And there was so much missing – what does this team actually do? What makes this work interesting or different? What sort of environment do they work in? There was nothing there to make the job description feel like a special opportunity.

On Stack Exchange podcast #69 (~14 minutes in), Joel Spolsky, Jay Hanlon, and David Fullerton discuss how putting more jobs in front of developers should help them find better jobs they like more. By increasing the supply of employers, they hope to improve the quality of the matches they make. This seems good, but it doesn’t clear out all of the issues in the market, because the costs of evaluating a match are so high. From the beginning of the process, both sides are engaged in a complex signaling dance born of mutual distrust, and there are significant congestion issues in the market. (This is a significant simplification of Who Gets What and Why by Alvin Roth, which is a great explanation of matching markets as opposed to commodity markets.)

So, back to the job description above. It doesn’t help people decide whether this is a job they would want, so there is a high cost to finding out more about it. It was getting looked at only because recruiters were actively directing people to it; it wasn’t generating interest on its own. It doesn’t signal well because it creates no emotional connection between you and the job – no sense of why you would want to work with these people on these problems. It hits none of the emotional or logical points needed to really draw an applicant in.

If I could rewrite the posting, I would include more specifics about what the team is doing and the opportunities that result. I’d fill in more details about the technical stack. I would also try to build an emotional connection to the team: show me the personality of the team, how they like to work, and what they think. If you can, highlight the specific positive elements of the company’s identity, like growth or stability. But job postings and recruiters aren’t the only way to find talent; I would also try to get current employees out into the community to find people who are interested in the kinds of things you are working on and bring them in techie to techie. That’s what I’d do to try and recruit me.

NuGet Conversion

At work, my team has been working toward converting a chunk of our application to be consumed as NuGet packages rather than direct project references. I wanted to document some of the issues we’ve been having, since our conversion is far more complex than most of the others I’ve seen documented. Hopefully others doing similar conversions will find it useful and can avoid some of the pitfalls we ran into.

Let me start by describing the system as it stands. We have roughly 4 million lines of C#, 450+ projects, and 200 solutions. One of the solutions is supposed to represent everything; the others represent different vertical slices of the overall application. This produces ~300 console applications, ~50 web endpoints, a couple of old WinForms applications, and some COM components. It is worked on by roughly 20 teams spread across 5 or 6 locations. It also has, like many complex software applications, a large number of little pieces that are still used but that nobody is looking at day to day.

So the thing that jumps out from the previous paragraph is the “supposed to.” We found a lot of things that weren’t in that solution when we started our NuGet conversion, and we were frankly surprised we hadn’t broken them at some point. It’s also just a lot of code to work with as one unit: compiling it all together and running even the minimal subset of tests locally pre-commit is highly onerous and takes about 4 hours.

The Initial Plan

  1. Convert the library at the base of the dependency tree to have all of its dependencies as NuGet packages.
  2. Lock the library in SVN, and migrate to git.
  3. Set up the CI system to build and publish the library as a package.
  4. Repeat steps 1-3 for other libraries that have a dependency on the initial library.
  5. Let other teams convert their code at their convenience to consume the package rather than the version left in SVN.

Our initial concerns with the package change were that we did not want to break any of the code that was dependent on the libraries, or have to go and find everything that referenced the libraries and change those projects directly. This is how the git migration and the NuGet package work got intermingled. We couldn’t really migrate to git with the repo as-is, since it has 400,000 revisions with a lot of medium-sized binary file churn. But we figured commingling the two efforts limited risk and let people move to the package consumption model at their own speed, which might also help us find chunks of the system that nobody is currently thinking about.

Where are we so far? After about two man-weeks we’re most of the way through step one, and things have not worked out quite the way we wanted. In converting the existing dependencies over to NuGet we ran into several issues.

First, NuGet was by default restoring files to a location relative to the solution file, but looking for them using paths relative to the project, which played havoc with the different vertical solutions. NuGet expects a couple of standard project layouts, and we weren’t using them. The really annoying part was that if you built the solutions in a particular order, the libraries would all get restored to the correct place – and that happened to be the order we used when we first tested it. We resolved this by changing the NuGet config file to restore all packages to a single location, so a project’s relative paths wouldn’t change depending on which solution it was currently part of. This had the added advantage that you don’t end up with dozens of copies of the same packages strewn all over your dev environment.
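Concretely, that change amounts to setting `repositoryPath` in a NuGet.config placed above all of the solutions; something along these lines, where the folder name is illustrative rather than our actual layout:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <!-- Restore every solution's packages to one shared folder.
         The path is resolved relative to this NuGet.config file,
         so all projects see packages at the same relative location. -->
    <add key="repositoryPath" value="..\SharedPackages" />
  </config>
</configuration>
```

With a single repository path, a project’s package hint paths stay valid no matter which vertical solution pulled it in.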

The next issue was that multiple teams had written custom build scripts that this broke, since we were using automatic/command-line package restore [1]. Since we didn’t have access to change those custom build scripts, we reverted our changes and stopped to regroup. Switching to MSBuild-Integrated package restore would fix the build scripts, but would require manual intervention in every solution to set up and would add ~600 MB of binaries to the SVN repo. Sticking with the initial approach would mean an ongoing stream of problems for all of the other teams with custom build processes, especially since it was unclear who had what set up. We ended up switching to MSBuild-Integrated restore, mostly due to the overriding need for stability. Technically, we thought it was the lesser choice, but other concerns ended up being primary.

The last issue so far stems from our use of ILMerge to merge the output assemblies for some of the console applications. Some of the assemblies converted to NuGet packages had previously been distributed to target machines via the GAC, and one of them can’t be used with ILMerge due to issues with what happens when it gets repackaged. The common alternative to ILMerge doesn’t play nice with NuGet, since it requires adding assemblies to projects in a different way. We haven’t fully resolved this issue yet. In the short term, we worked around it by putting the problem versions of the assemblies back in the GAC. In the long term, we are planning to replace ILMerge with LibZ, but haven’t gotten to it yet.
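The short-term GAC workaround amounts to running something like this on each target machine (the assembly name here is hypothetical; gacutil ships with the Windows SDK, needs admin rights, and requires the assembly to be strong-named):

```shell
# Install the problem assembly version into the machine-wide GAC
gacutil /i Some.Shared.Assembly.dll

# Confirm it was registered
gacutil /l Some.Shared.Assembly
```

It keeps the old deployment model limping along for the affected assemblies until the ILMerge replacement lands.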

That’s the progress so far. Some other time-sensitive issues have pre-empted this effort, so it has moved to the back burner, but I’ll post a followup once we get further along in the process.
1. There are three different ways to restore packages: automatic, command line, and MSBuild-Integrated. Automatic restore is integrated into Visual Studio. Command-line restore is manual invocation of the nuget executable. MSBuild-Integrated restore hooks into the next tooling layer down, so you only need to set up one thing, but it has some rather unfortunate side effects. Automatic and command-line restore tend to get used together, since they operate on the same data but are used in different contexts. There is good documentation on the pros and cons of all of these options at the NuGet site.

Motivation to Learn


Today’s post is inspired in part by this post on Programmers Stack Exchange. The question is about how to motivate a coworker who isn’t motivated to try new things and isn’t using the best practices for the current language/programming paradigm. Buried in the comments is the nugget “I find [the] biggest issues are the guys who’ve been doing this for 10, 15, 20+ years,” referring to the subset of senior engineers who, if they are not using the accepted practices, are the hardest to get through to. The implication is that senior engineers who either haven’t kept up or just never learned are a bigger issue than the newer (or junior) guys. This makes sense – the newer engineers aren’t particularly invested in any technology or practice, and may not have completed any big projects at all to see how the technologies worked out.

The senior engineers are the demographic to target to effect technical change, regardless of your role in the organization. After writing it down it seems obvious, but I want to discuss what I see as the interlocking reasons why it’s true. First, the senior engineers are the ones teaching the juniors. Second, the senior engineers are the ones who built most of your current system, and in doing so they defined the normal expectations for development; as a corollary, anyone saying things should be done differently is implying that the senior engineers were doing the wrong thing before. Third, they have become complacent – this is what they are used to doing, they’ve been doing it this way for years, and in all those years they haven’t given anything newer a chance. Fourth, the senior engineers may have lost their passion for development, since anyone can lose interest in something they’ve been doing a long time. Fifth is burnout, which is different from losing passion in that they’ve been pushing too hard, and have let themselves slip on the things they know are good in the long term for a short-term win. Let’s walk through these reasons in a little more detail.

That senior engineers are teaching the juniors is again obvious; it is rare to be so lucky as to hire an entry-level engineer who is both super motivated and ready with all sorts of ideas they want to go out and put into practice. And even if your senior engineers aren’t teaching technical ideas, they are informally teaching the cultural values of the organization. This can happen via code reviews (or the lack thereof), where the values of what is considered “good” are socialized. If a junior doesn’t use a standard library and writes 10k lines of code to do something they could have just pulled in a package for, and nobody tells them they were wrong, then that’s now accepted behavior. To a certain degree, you’ve just set back that junior’s development.

If your current senior engineers built the system, and that system isn’t obviously flawed, they can see that as a validation of whatever they did to build it. A working system is a validation of anything and everything that went into building it, even if they were the only ones using that process or toolset. Success excuses many sins, but the bad habits developed along the way remain. When someone else shows up and starts trying to change things, people get defensive about the way they were doing it before. They can’t separate the technical criticism from what they view as a knock on their capabilities, or an insult.

The practices, tools, and resources that the team is currently used to (and their attachment to them) may be the problem. I discussed this before in a post on complacency. If a team isn’t regularly trying something new, they will get stuck with a narrow set of tools in their toolbox, and will either not have the right tool or not even know there is a tool to look for. In the context of this question, the complacency isn’t a failure to look for new things to try, but a failure to remain open to new ideas. Senior engineers should be driving the evaluation and adoption of new tools and techniques, not just following along – but following is still miles better than hindering.

If an engineer has lost their passion for building software, you aren’t going to motivate them to learn to build software differently. They won’t want to put in more than the minimum effort, which precludes any real learning. You could find them a different role that better matches their current interests, but it can be difficult for them to acknowledge the problem, or to find a role that fits their interests and skills.

Burnout is a whole post on its own. It’s different from losing passion for software: burned-out engineers don’t want to build this thing or work with these people at all. They just want to be done with the project or release and get on with their lives. This can be a really nasty issue, since you can easily slide from burnout into complacency or dispassion and get stuck there.

As an engineer you can target other engaged senior engineers to try and spread a technical change. If you can pass on that change to one other senior engineer and have them help champion the change to any other engineers they work with, then you’ve doubled your efficacy. This will help you apply peer pressure to the late adopters among the senior engineers and have the idea seen by more junior developers. None of this will directly help deal with a senior engineer who isn’t engaged, but the idea is to flow around their disinterest. They may not be interested in changing but they may also not be interested in keeping you from doing something.

If they are actively blocking a change, try to understand why they are against it – specifically, whether it’s resistance to change in general or resistance specific to the change you are suggesting. If the resistance is specific to this change, work with what they are telling you. If they are resistant in general, I would suggest taking a step back and considering whether you are advocating a best practice (unit testing, SOLID, design patterns, etc.) or something newer and less well-accepted (a specific library or tool, a new language or paradigm). If the former, keep pushing – try to show the value of the change by using it in other places. If the latter, perhaps you should invest your energy in a more productive way.

Singletons and service locators

Most people are familiar with the Singleton pattern. It was generally implemented and used as a static on steroids: calling Singleton.Instance.Foo(bar); meant that you couldn’t replace the implementation – you couldn’t mock it or extend it. This led to a backlash against the Singleton pattern, but I always felt that reputation was earned unfairly.
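For concreteness, here is a minimal sketch of that classic shape – shown in Java for illustration, though the C# version with a static `Instance` property looks nearly identical (the `foo` method is just a placeholder):

```java
// The classic hard-coded Singleton: exactly one instance, reached
// through a global static accessor that call sites bake directly
// into their code.
final class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    private Singleton() { }  // nobody else can construct one

    public static Singleton getInstance() {
        return INSTANCE;
    }

    // placeholder for whatever the singleton actually does
    public String foo(String bar) {
        return "handled: " + bar;
    }
}
```

Every caller writes `Singleton.getInstance().foo(...)`, welding itself to this exact class; there is no seam where a mock or an alternate implementation could be slotted in.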

I saw the pattern’s goal as ensuring there is only one instance; the globally accessible variable was a side effect of the implementation. But that side effect became the way the object was used. It was pulled in like using a service locator, but without even that little bit of indirection. Your code took an implicit dependency on something that couldn’t be switched out. That, in my mind, was the big problem – not the global state.

This is why the Singleton lifestyle is so common in inversion of control containers. It gives you the positive aspects of the Singleton, without that unfortunate dependency. You could use a Singleton like a normal object by hand, but it would require a level of coding discipline that is hard to maintain. The Singleton lifestyle still has the global state, but it is abstracted away cleanly, thereby negating the worst of the associated side effects. Plus with the container you can just change the lifestyle if maintaining the Singleton aspect becomes too onerous for the benefits.
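To make the lifestyle idea concrete, here is a toy container sketch (in Java, and not any real container’s API – just the mechanism): the consumer only ever asks the container for a type, and whether it gets one shared instance or a fresh one each time is purely a registration detail.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy IoC container: just enough to show how a "Singleton
// lifestyle" differs from a transient one. Real containers wrap
// the same idea in far more machinery.
final class ToyContainer {
    private final Map<Class<?>, Supplier<?>> registrations = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    // Transient lifestyle: a new instance is built on every resolve.
    <T> void registerTransient(Class<T> type, Supplier<T> factory) {
        registrations.put(type, factory);
    }

    // Singleton lifestyle: the first instance is cached and reused,
    // but callers can't tell the difference at the call site.
    <T> void registerSingleton(Class<T> type, Supplier<T> factory) {
        Supplier<Object> cached =
            () -> singletons.computeIfAbsent(type, t -> factory.get());
        registrations.put(type, cached);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        return (T) registrations.get(type).get();
    }
}
```

Switching a registration from `registerSingleton` to `registerTransient` changes the lifestyle without touching a single call site – exactly the flexibility the hand-rolled `Singleton.Instance` denies you.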

Reconsider the Singleton pattern. It may not be up there with some of the others in usefulness, but it is still a valuable tool in the toolbox.