Book Chat: Programming in Scala

I had previously mentioned Programming in Scala when discussing Scala for the Impatient, saying that Scala for the Impatient was written as a reaction to the ~800 page bulk of Programming in Scala, for people who wanted just enough to get started. After reading Programming in Scala, the criticism of its length feels fair. The first half of the book was massive overkill for anyone with experience in a C-derived language or any other object oriented language; there were even sections marked as optional reading if you were familiar with Java, because the behavior being described was so similar. In comparison, the second half of the book was a wonderful experience even for an experienced programmer, with in-depth explanations of all of the advanced language features.

Some of the things I learned were simple. For instance, regular expressions can be used as extractors, which is a straightforward idea. Or that Predef is implicitly imported everywhere. Or that the arrow operator used to build pairs is actually defined as an implicit conversion in Predef and not an explicit part of the language.
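To jog my own memory, here is what those two ideas look like in plain Scala (my own toy examples, not the book's):

```scala
import scala.util.matching.Regex

// A Regex value works directly as an extractor in pattern matching.
val Date: Regex = raw"(\d{4})-(\d{2})-(\d{2})".r

"2016-05-01" match {
  case Date(year, month, day) => println(s"year=$year month=$month day=$day")
  case _                      => println("not a date")
}

// The -> syntax for pairs is not special grammar; Predef's ArrowAssoc
// implicit conversion adds it to any value.
val pair: (String, Int) = "answer" -> 42   // equivalent to ("answer", 42)
```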

Other sections were more complex. The rules for how for expressions get translated into calls on map, flatMap, and withFilter were similar to what I had figured out on my own, but the rules for how guards and definitions inside the expression are handled added a lot of clarity to what I had learned by doing. I also learned how type bounds can be used alongside variance annotations. The authors also discussed the transform method on futures coming in Scala 2.12, which has me excited to get to that upgrade.
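Roughly, and again with my own toy examples rather than the book's:

```scala
// A for expression with a guard...
val evensTimesTen = for (x <- List(1, 2, 3, 4) if x % 2 == 0) yield x * 10

// ...is rewritten by the compiler into roughly this:
val desugared = List(1, 2, 3, 4).withFilter(x => x % 2 == 0).map(x => x * 10)

// A lower type bound lets a covariant class accept a wider element type:
class Stack[+T](val elems: List[T]) {
  def push[U >: T](elem: U): Stack[U] = new Stack(elem :: elems)
}

val ints: Stack[Int]    = new Stack(List(1, 2, 3))
val nums: Stack[AnyVal] = ints.push(4.5)   // widens to the common supertype
```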

There were some other things covered that even after considerable study I'm not sure I understand. I understand the syntax for refinement types, but I don't think I understand their value even after the in-depth example using currencies. There was also an in-depth discussion of how the designers arrived at CanBuildFrom in the collections package. CanBuildFrom lets common operations be implemented once across many collections while still returning a collection of the same type rather than some supertype. It makes sense in the abstract, but I don't think I could implement a similar pattern without copying it directly out of the book.
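For my own reference, here is a minimal sketch of the pattern as I understand it in the 2.x collections: a generic operation that asks for a CanBuildFrom so it can hand back the caller's collection type. This is my simplification, not the library's actual implementation.

```scala
import scala.collection.generic.CanBuildFrom
import scala.language.higherKinds

// Duplicate every element, returning the same kind of collection we were given.
def duplicateEach[A, CC[X] <: Traversable[X]](xs: CC[A])(
    implicit cbf: CanBuildFrom[CC[A], A, CC[A]]): CC[A] = {
  val builder = cbf(xs)   // a builder for "the same collection type as xs"
  xs.foreach { a => builder += a; builder += a }
  builder.result()
}

duplicateEach(List(1, 2))    // List(1, 1, 2, 2)
duplicateEach(Vector("a"))   // Vector("a", "a")
```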

Despite the book's heft, there were a couple of topics I would have liked to know more about. I was hoping for a discussion of the reflection capabilities provided by manifests, type tags, and class tags, but since they are library pieces and not integral to the language they weren't covered. There were some oblique references to how bytecode gets generated from various Scala structures, but I was hoping for more insight into how to design interfaces that are less susceptible to breaking changes under the hood even when the Scala side looks fine.

Overall it's a good read, and not as long a read as you would think a book this size would be. It divides cleanly into small sections, so you can sit down, read a page or two, and make steady progress over time.


Book Chat: How To Solve It

How To Solve It isn't a programming book. It's not exactly a math book either, though you will find yourself doing geometry while reading it. It isn't a book on logic, but it is all about structured thought processes. I would describe it as a manual for teaching a systematic approach to problem solving, using geometry and a series of worked examples. It tries to lay out all of the thoughts that whiz through your head when you see a problem and immediately know how to solve it, without ever having contemplated how you knew. It's a fast read, assuming you know the geometry he uses in the examples.

The problem solving process is broken into four basic steps: understanding the problem, devising a plan, carrying out the plan, and looking back. At first it seems obvious, but that's the thing about a structured approach: you need to cover everything and be exhaustive about it. For example, to understand the problem you identify the unknown, identify the data, identify what you want to accomplish, try to draw a picture, introduce suitable notation, and figure out how you will determine success. If you just want to know whether to buy milk at the store, this sort of formal process is overkill, but if you are struggling with a more complex problem, like tracking down a memory leak or settling on a cache invalidation strategy, it can be valuable to structure your thoughts this way.

I haven't had a chance to apply it to a real problem yet. I did use some of the teaching suggestions – how to guide the pupil to solve their own problems – with one of the junior engineers I mentor, and it seemed productive. I got him to answer his own question, though not enough time has passed to see whether it improves his problem solving abilities going forward.

Overall the book was an interesting read and seems genuinely applicable to real-world problems.

Media Diet

Last week I wrote about how I really tackled my imposter syndrome by reaching out into the wider community. It helped me feel like I was making progress outside of whatever was going on at work. I wanted to share the resources I use to find new ideas and keep up my continuous learning.

Blogs

Podcasts

 

This may seem like a lot of stuff, but most of the podcasts publish once a week, and the blogs post even less frequently than that. I also try to get to a meetup or two a week on top of this. The whole diet helps me feel more informed and in touch with a software community outside work.

Imposter Syndrome Meetup

I was at a local meetup about imposter syndrome this week and it made me remember how far I have come in my own career. The speaker talked about his journey and the times he felt like an imposter, even though he had the sorts of experiences that would make most engineers jealous. I want to talk through my own background and how I managed to come to grips with my own professional insecurities, in the hope that it inspires others to have more confidence in themselves.

I remember how little I felt like I was learning at my first job, and how little my coworkers around me seemed to know. When I went to interview for my next job a couple of years later I was very nervous that I was behind the curve, and I wasn't sure I entirely understood how real-world software engineering was supposed to work. When I got the job, I told myself I had gotten lucky: the interview was devised by a bunch of alumni from my college, so it covered the kinds of questions I had seen in school.

At that job the imposter syndrome kicked in immediately. I was afraid they had certain expectations of me based on my experience level, while I felt like I hadn't progressed past what I had learned in school. I thought I was behind the curve on version control practices, and I hadn't gained any real exposure to domain modeling or object oriented programming. I knew these skills were going to be important at this job and assumed they were things I was expected to know when I walked in the door. There was all of the domain information that comes with a new job as well, and at this company it was literal rocket science, so I couldn't slouch on that aspect either. The first few months were definitely rough; I had a couple of days where I spent all day fighting with basic ideas and couldn't get anything to work, which made me feel like I didn't deserve to be there. It eventually got better as I gained experience with the domain and the technology. I gained confidence after running down a couple of very gnarly bugs and getting praised for a creative solution to an awkward problem. Ultimately, though, my anxiety was misplaced. It turned out that my managers had never expected me to walk in the door with those skills; they had picked me because they were happy with what I knew already.

Sadly the company ran into hard times and I got laid off, but this paradoxically resulted in a big confidence boost. When I got back to my desk after hearing the bad news from my boss, my phone was ringing: a former coworker calling to schedule an interview with his new company. All that time I had thought I was barely getting by; he had thought I was doing fantastic work. In the two weeks between then and my last day at that job I got another offer as well, which boosted my confidence further.

I took my former colleague's offer; I'm sorry to say it ended up not being a great culture fit for me. But by then the increase in confidence meant I was more willing to take a chance and make a move, looking for something that would be more of a challenge. I took a position in the same domain, as an expert brought in to help salvage a failing software project. On paper this job was a good fit for me, since I had experience with both the domain and the technical stack. I was confident going in and was initially given a lot of latitude to do what needed to be done, which was great technical experience. But it had me doing far more management-style activities than I wanted, an area where I also felt I didn't have the right skills and experience. Then, after getting the software stabilized and finding an order of magnitude performance improvement, the reward was to see the entire project bogged down in a mountain of process. So, while I'd gained confidence in my own abilities and standing when it came to technical issues, I was circling back to feeling like an imposter in this new process/management role.

My discomfort resulted in me moving to a small startup to help anchor their development team. The environment was very unstructured and goals changed week to week. I was immediately asked to give expert opinions on technologies I had never worked with before. The situation was stressful because I felt I wasn't qualified to give these opinions, but it wasn't clear they had anyone more qualified. On one hand I was faking it, in that I didn't know a lot of what I was talking about; on the other hand I took the initiative to learn a great deal about these new technologies. Really, I learned how to learn about technologies. The few technology decisions I made during my time there all seemed to work out fine, but I don't know how they compare to the choices I didn't make, and I wasn't there long enough to see the long term outcomes. Even now, I still find myself downplaying the difficulty of the work I did there, still feeling like I was just a pretender.

My next job was at a large tech company, and it was an eye opening experience. This was the first ‘normal’ web application I had worked on since my first job, and I was worried that I was out of practice. Since I had so many more years of experience than the last time I had worked on web applications, I assumed the expectations would be higher than I could meet. I was worried that I would show up, not know how to do anything, and be summarily fired. That turned out not to be the case, but the impression I carried in with me limited what I was able to accomplish. My assumption that I couldn't contribute right away meant I stayed quiet about areas where I could have made improvements that would have benefited the company; I let mediocre practices I witnessed linger way too long before trying to change them.

Despite the good work I did there, my inability to change the culture and other improvable development practices really hurt my confidence about what I could achieve in that environment. This, combined with my lack of knowledge around building web applications, pushed me to do anything and everything I could to grow. I put a concerted effort into getting out into the local development community to find a broader sense of inspiration. This was also the period when I started writing this blog. I started attending a number of local meetups and listening to various podcasts. Talking to so many new people who shared my struggles helped me understand that others don't know some magical trick that I don't. It also made me realize that learning how to learn was one of the most important things I had achieved. For me, moving on from imposter syndrome has been about accepting that I don't know everything I wish I did on a topic, but neither does anyone else; it's all about our willingness and ability to learn and improve.

This all culminates in my current position, where I changed tech stacks to one I had never used before. My specific experience wasn't immediately relevant to the new stack, but I did bring a lot of thoughts on unit testing, domain modeling, and other good technical practices. Since this was my fourth stack in 12 years as a professional, I had a fair idea of how to pick up a new one and how to leverage what I already knew to learn new things. There are still lots of things I don't know, but I know enough to ask reasonable questions and to apply the concepts from other stacks. I am still at points concerned that I don't know enough about certain topics, but I have become fearless about asking questions and unafraid of looking uninformed. That question asking seems to have given one of the junior engineers on my team the confidence to ask questions in pull requests when he doesn't understand what's going on. That sort of safe space within the team is the environment I want to be in, and accepting my own lack of knowledge on some fronts has empowered those around me to find a better way for themselves.

Seven More Languages in Seven Weeks

Seven More Languages in Seven Weeks is a continuation of the idea, started in Seven Languages in Seven Weeks, that looking at other languages can expand your understanding of concepts in software engineering. While you may never write production code in any of these languages, seeing the ideas they offer may influence the way you think about problems and provide better idioms for solving them.

This installment brings chapters on Lua, Factor, Elixir, Elm, Julia, MiniKanren, and Idris. Each of these languages is out on the forefront of some part of software engineering. Lua is a scripting language with excellent syntax for expressing data as code. Factor is a stack-based programming language with interesting function composition capabilities. Elixir brings Ruby-like syntax to the Erlang VM. Elm is reactive functional programming that compiles to JavaScript. Julia targets technical computing with a more user-friendly feel and good parallelism primitives. MiniKanren is a logic programming language and constraint solver built on top of Clojure. Idris is a Haskell descendant that brings in the power of dependent types to enable provably correct functional code.

Overall it was an interesting survey of the variety of programming languages out there. Some I had done a bit with before (Lua, Elixir), some I had heard of before (Elm, Julia, and Idris), and some I hadn't even heard of (Factor and MiniKanren). Each chapter is broken into three ‘days’ indicating a logical chunk of the book to tackle at once, and each day ends with a series of exercises to help make sure you understood what was presented.

Since these languages are out on the edge of the world in programming terms, they are evolving fairly quickly. This bit the Elm example code particularly hard, since large portions of it have been deprecated in the releases since publication and no longer work on the current runtime. Compared to the lineup from the original book (Clojure, Haskell, Io, Prolog, Scala, Erlang, and Ruby), the sequel covers a much broader variety of languages, but nothing with the popularity of Ruby or the legacy install base of Erlang. Since the book was written in 2014, none of these languages has had a massive breakout in popularity and adoption, though they do seem to do well on lists of languages people say they want to work with.

Overall it's an interesting take on where things could be going. I don't think most of the languages covered have significant mainstream appeal right now, but two of them seem more ready for primetime than the others. Julia definitely has a niche where it could be successful, and the environment feels ripe for something like Elm to surge in popularity, since frontend technology seems to be going through constant revisions.

Java Containers on Mesos

I recently ran into an interesting issue with an application running in a container. It would fire off a bunch of parallel web requests (~50) and would sometimes receive the results but fail to process them in a timely manner, despite our application performance monitoring saying CPU usage stayed very low during the requests. After a ton of investigation, I found a few very important facts that contradicted some assumptions I had made about how containers and the JVM interact.

  1. We had been running the containers in Marathon with a very low CPU allocation (0.5) since they didn't regularly do much computation. This isn't a hard cap on the container's resource usage. Instead it is used by Mesos to decide which physical host should run the container, and it influences the scheduler on the host machine. More information is available in this blog post.
  2. The number of processors the runtime reports is the number of processors on the host node. It has nothing to do with the CPU allocation made to the container. This affects all sorts of under-the-hood optimizations the runtime makes, including thread pool sizes and JIT resources; there is a small sketch of the effect just after this list. Check out this presentation for more information on this topic.
  3. Mesos can be configured with different isolation modes that control how the system behaves when containers begin to contend for resources. In our case it was configured to let a container borrow against its future CPU allocation up to a certain point.
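To make item 2 concrete, here is a minimal sketch; the core counts and pool sizes are illustrative, not from our actual service:

```scala
import java.util.concurrent.Executors

object CpuSizingSketch extends App {
  // Inside a container with a 0.5 CPU Mesos allocation, this still reports
  // the host's core count (say 32), not anything related to the allocation.
  val reported = Runtime.getRuntime.availableProcessors()
  println(s"JVM sees $reported processors")

  // Any pool sized from that number will be wildly oversized for the container...
  val oversized = Executors.newFixedThreadPool(reported)

  // ...so the size needs to be set explicitly to something that matches the allocation.
  val rightSized = Executors.newFixedThreadPool(2)

  oversized.shutdown()
  rightSized.shutdown()
}
```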

This all resulted in the service firing off all of the web requests on independent threads, which burned through the CPU allocation for the current time period and the next, so when the results came back there was nothing left to process them with. The immediate fix was to change the code to only fire off a maximum number of requests at a time. In the longer term we're going to change how we define the number of threads, but since that is a bigger change it got deferred until we could measure its effects more carefully.
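Here is a hedged sketch of that immediate fix: cap the number of in-flight requests by running them on a small fixed-size pool. The makeRequest function and the pool size of 4 are stand-ins, not our real code.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object BoundedRequests {
  // A small dedicated pool: at most 4 blocking requests run at once,
  // instead of ~50 threads all burning CPU quota simultaneously.
  implicit val boundedEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  def fetchAll(urls: Seq[String])(makeRequest: String => String): Future[Seq[String]] =
    Future.traverse(urls)(url => Future(makeRequest(url)))
}
```

With blocking requests the pool size is the effective cap on concurrency; a non-blocking client would need a different throttling mechanism.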

Book Chat: The Pragmatic Programmer

For a long time this had been on my list of books to buy and read, with a “note to self” to check whether there was already a copy somewhere on my bookshelf before buying one. It felt like a book I had read at some point years ago but didn't really remember anymore. Even the woodworking plane on the cover felt familiar. It felt like it was full of the kind of ideas about creating software that you love when you encounter them but that are disappointingly sparse in practice. Despite being from the year 2000, it still contains a wealth of great advice on the craft of creating software.

Since it is about the craft of software, not any specific technology, tool, or style, it has aged much better than other books. That timeless quality makes the book like a great piece of hardwood furniture: it may wear a little, but it develops a patina that says these are the ideas that really matter. There is an entire chapter devoted to mastering the basic tools of the trade: your editors and debuggers, as well as the suite of command line tools available to help with basic automation tasks. While we've since developed a number of specialized tools for a lot of these tasks, it is valuable to remember that you don't need to break out a really big tool to accomplish a small but valuable task.

It's all about the fundamentals, and mastering these sorts of skills transfers across domains and technical stacks. It was popular enough that it spawned an entire series of books – The Pragmatic Bookshelf – and while I have only written about one of them, I have read a few more and they've all been informative.

About two-thirds of the way through the book I realized that I had indeed read it before – I had borrowed a copy from a coworker at my second job, who recommended it as a source he had learned a lot from. I remember enjoying it a lot but not really appreciating its timeless quality. That was probably because it would have been around 2007, when the book wouldn't have seemed as old, especially since things seemed to be moving less quickly then. Or maybe I just feel that way because I didn't know enough of the old stuff to see it changing.

If you haven’t read it, go do it.


Being a Wizard

A somewhat obscure question came up in a chat channel at work, and I knew the answer, which helped out another engineer. The question wasn't anything that unusual – it was about a weird error message coming from an internal library. Searching the library's code wasn't immediately helpful, since the unique part of the error message didn't appear in the code. The reason I knew the answer wasn't because it was easy, but because I had spent an hour investigating the same thing the day before.

Sometimes when you see someone have an apparently impressive insight, it doesn't necessarily mean they are better than you; they may just have had an experience that makes the answer obvious to them. This applies to all sorts of other technical activities. During the Hackathon I did a similar thing: one of the other devs on the team was integrating the portion of the code I was working on and having trouble, and the problem was immediately obvious to me because I had put in the time earlier to figure it out the hard way. Your mind is a powerful pattern matching system; it recognizes a familiar image, like a well-worn meme, instantly and without effort.

 


 

If you think back to when you first started learning calculus, the terminology and symbols were complicated and foreign, but you gradually gained a familiarity with them, and eventually they became second nature.

You may go to work and build some business web app in one particular technology stack, but there are all sorts of concepts that go with it that aren't the business or the tech stack. You're synthesizing things like design patterns, test driven development, RESTful web services, algorithms, or just the HTTP stack and everything that goes with it. These are the transferable skills that let you “cast a spell” and jump past a problem.

When I sat down to learn Scala, it wasn't that big a task, since most of the language features had equivalents I was familiar with from other languages. That let me skip ahead to the nuances of Scala's implementations and the few language features I was less familiar with. Having experience with those ideas in the abstract let me jump ahead on the learning curve and look like a wizard. And one of the most common feelings of impostor syndrome is the worry of being found out, like the wizard behind the curtain.


Hackathon

At work we recently held a hackathon where everyone who was interested had 24 hours to build whatever they thought would be interesting and useful. We had 21 teams across 4 offices, with ~60 participants in total. I had never done a hackathon before and this seemed like fun, so I registered without a particular idea in mind. As the start rolled around I was planning to put together a tool to simplify how we generate configurations for containers in Marathon. However, at the kickoff pizza dinner I heard another developer describing his plan to tackle our chronic shortage of available conference rooms. Every afternoon there would be an hour or two when no meeting rooms were left, and while we have more space coming, it will be a while until it is available. Having been a victim of this problem before, I asked if they could use another hand on their team and was graciously invited aboard.

The key insight they had was to use the logs from the wireless network to figure out who was in which office each day. Once we knew who was in the office, we could cross-reference that with their calendar appointments and see which rooms were booked but didn't need to be. There was some concern about false positives – we didn't want the system deciding that because you weren't in the office by 10 it should release your 11 o'clock room while you were stuck in traffic – so we built a hipchat integration to check with you first.
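A rough sketch of the matching idea, with the data shapes and names made up for illustration:

```scala
// Who was seen on the office wifi today vs. who booked a room: bookings whose
// owner hasn't shown up are candidates to release, pending a hipchat confirmation.
final case class Booking(owner: String, room: String, hour: Int)

def candidateReleases(seenInOffice: Set[String], bookings: Seq[Booking]): Seq[Booking] =
  bookings.filter(b => !seenInOffice.contains(b.owner))

val todaysBookings = Seq(Booking("alice", "4A", 14), Booking("bob", "4B", 15))
val presentToday   = Set("alice")

candidateReleases(presentToday, todaysBookings)
// Seq(Booking("bob", "4B", 15)) -> ask bob on hipchat before releasing room 4B
```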

The three of us started Thursday night at about 6 with a general divide and conquer along the three major components: data mining/parsing, calendar matching and decisions, and the hipchat integration. I mostly worked on the hipchat portion. Since the bot had to reach out to specific people of its own volition, as opposed to responding to people or messaging a fixed channel, our needs were different from what most of the prebuilt hipchat integrations do. I ended up doing an XMPP integration using Smack. The biggest challenge in getting this working in the context of a web service was that I needed to keep the connection to hipchat open longer than the API implied it needed to be. I found this out when my initial attempt to send a message and then immediately close the connection failed: the message hadn't finished going through before we closed the connection on our end. After spending several hours working through that, I called it a night at about 1:30 a.m. and headed home to catch some sleep.

Getting back the next morning at about 7:30, I found one lone developer in my office who had been there working on his project all night; he was porting a feature from the web app to the Android app because he wanted it when he used the app. I spent the first part of the morning hooking up the response handling from hipchat and found another interesting problem: I wasn't able to respond to myself as the bot. If the bot was using my credentials to send messages, it wouldn't see my responses to it. I suspect hipchat was being clever and delivering the message as history rather than as a new message, but I was never able to confirm that. At 8:30 the dev who had been working on the matching piece got in and started processing live data for the day, and our app immediately started spitting out rooms we thought didn't need to be booked. I did a little scouting at about 10:30 to confirm the situation, and the matching seemed right.

We ran into a credentials snag getting an account with the rights to unreserve other people's meetings, so we didn't have a full end-to-end demo, but the example meetings we had identified painted a pretty clear picture of how well it could work and how many rooms it could free up.

When demo time rolled around we all got together to show off what we had built, and there was a bunch of interesting stuff. There was a set of living visualizations of service dependencies built by parsing URLs out of the system configuration data. There was a port of one of our mobile apps to the Apple Watch. Two different teams built Alexa integrations for different portions of our products. Several teams built features for various mobile apps. One team set up a version of Netflix's Chaos Monkey in the load test environment, including a hacked Amazon Dash button that would kill a server in that environment at the push of a button. Another team built a deploy football in the vein of the nuclear football, complete with keys, switches, and a little screen to display progress. Two tech writers twisted arms and got someone to build a hipchat integration to look up acronyms from a glossary they had put together on the wiki.

Overall I had a blast, but I ended up pretty exhausted from the ordeal. Some prizes will be given out on Monday. I'm not sure of the exact criteria, but I wasn't competing for them anyway – I was enjoying the latitude to do what I thought was best. One more prize will be given out at the company-wide all hands, where everyone gets to vote on the most impactful project after we've had a chance to see how everything turns out in real usage.

Write the Code You Want

“Write the code you want and then make it compile” was a thought expressed about library design at the NE Scala Symposium. It is a different way to describe the TDD maxim of letting usage in tests guide the design, and it is very much influenced by Scala's extremely flexible syntax rules and DSL creation abilities. One of the talks, Can a DSL be Human? by Katrin Shechtman, took a song's lyrics and produced a DSL that would compile them.

Since you can make almost any set of arbitrary semantics compile, there is no reason you can't have the code you want for your application. There may be an underlying library layer that isn't the prettiest code, or is significantly verbose, but you can always make it work. Segregating the complexity to one portion of the code base means most of the business logic stays clean and the related errors can be handled in a structured, centralized fashion.
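A toy sketch of what that looks like in Scala; the domain and every name here are invented, with an implicit class hiding the plumbing so the call site reads the way we want:

```scala
object OrderDsl {
  final case class Order(items: List[(String, Int)])

  // The "ugly" layer: an implicit class that makes `2 of "espresso"` compile.
  implicit class ItemSyntax(val qty: Int) extends AnyVal {
    def of(name: String): (String, Int) = (name, qty)
  }

  def order(items: (String, Int)*): Order = Order(items.toList)
}

import OrderDsl._

// The code we wanted to write in the first place:
val breakfast = order(2 of "espresso", 1 of "croissant")
```

The implicit class is the part I wouldn't want spread around a codebase, but isolated in one object it keeps the call sites clean.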

Taking the time to do all of this for a little utility probably isn't worth it, but the more widely a library is used the more valuable this becomes. If you've got a library that will be used by hundreds of people, really refining the interface to match how they think pays for itself in usability.

Building software that works is the easy part; building an intuitive interface and the comprehensive documentation that lets others understand what a library can do for them is the hard part. I'm going to take this to heart with some upcoming changes to a library at work.

This still doesn't even cover the question of deciding what you want. There are different ways to express the same idea: a function, a symbolic operator, or a full DSL can all provide the same functionality. You can model the domain in multiple ways too – case classes, enums, or a sealed trait – and an extension point can be a trait, a free function, or an implicit class. Deciding on the right way to express all of this is the dividing line between a working library and a good library.
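For example, here is the same tiny domain modeled two of those ways (my own illustration, not from any of the talks):

```scala
// 1. As a sealed trait family: exhaustiveness checking in pattern matches,
//    and room to attach data to individual cases later.
sealed trait Status
case object Active   extends Status
case object Disabled extends Status

def describe(s: Status): String = s match {
  case Active   => "up and running"
  case Disabled => "turned off"
}

// 2. As an Enumeration: less boilerplate and built-in name/ordinal support,
//    but weaker compile-time checking.
object StatusEnum extends Enumeration {
  val Active, Disabled = Value
}
```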