Who is the Customer for Security?

When looking at an application from a security perspective I can assess the controls involved and try to understand the threats it faces. But then comes the question: is this application secure? You can declare an application insecure easily enough; finding a flaw in the design of the controls or in their implementation makes it insecure. Essentially all applications appear to have vulnerabilities according to this 2017 report. When you have the keys to the kingdom to protect, you can go to significant lengths to defend them, like AWS does with its provable security measures.

For those of us with more limited means, we need a way to determine where the line is for what we should do. There isn't a clear line for what's good enough: a risk-based approach involves too much guessing, and an absolute approach doesn't scale with organizational size. You build a system that looks secure to the best of your knowledge and monitor it to make sure it stays that way. You counter new known vulnerabilities as they come up. You can try to be proactive with scanning; you can try to put yourself in a good place to react to issues that arise. You can build defense in depth and apply whatever emerging technologies you like to the problem, but if the only restriction on how much work you put into security is how much money you have, how do you decide when you have enough?

Some searching turned up this formula:

ROI = (average loss expected) / (cost of countermeasures)

Conceptually it makes sense, but what is the average loss expected? The article defined it as (number of incidents per year) x (potential loss per incident). That's still a guess at best; if you experienced some number of incidents in the past you can project forward. On the loss side you have reputational damage from the incident, the cost of response, and any regulatory fines. So at this point you have a guess times a guess, and you estimate how any action you take will impact those two guesses to determine whether it is worth doing. That doesn't seem like a good way to measure whether you are making an impact.
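To make the arithmetic concrete, here is a minimal sketch of that calculation in Scala, using purely illustrative numbers rather than anything measured:

```scala
// A minimal sketch of the ROI arithmetic described above; the inputs are
// illustrative guesses, not real figures.
object SecurityRoi {
  // Average loss expected = incidents per year * potential loss per incident
  def averageLossExpected(incidentsPerYear: Double, lossPerIncident: Double): Double =
    incidentsPerYear * lossPerIncident

  // ROI = average loss expected / cost of countermeasures
  def roi(incidentsPerYear: Double, lossPerIncident: Double, countermeasureCost: Double): Double =
    averageLossExpected(incidentsPerYear, lossPerIncident) / countermeasureCost

  def main(args: Array[String]): Unit = {
    // e.g. 2 incidents a year at an estimated $50k each, against $40k of countermeasures
    println(f"ROI = ${roi(2, 50000, 40000)}%.2f") // prints ROI = 2.50
  }
}
```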

I’ve been trying to think through other ways to analyze this sort of position. I keep coming around to something similar to downtime analysis. How confident are you in your understanding of your security, and how long does it take you to react to something that needs to be remediated once you’ve identified it? This formulation gives you a different way to scope your security activity. It pulls in a number of things as security related that wouldn’t count under the earlier formulation. Things like the scalability of your build and deployment pipeline become a security concern, because if you need to build and redeploy hundreds of services after finding out about a vulnerability in a commonly used library, it would be good not to bottleneck the system.
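As a rough illustration of that framing, the reaction-time piece could be tracked with something as small as the following sketch; the record shape and names here are hypothetical, not from any particular tool:

```scala
import java.time.{Duration, Instant}

// Hypothetical sketch of the "downtime analysis" style metric: given when a
// vulnerability was identified and when the fix was fully deployed, report
// how long remediation takes on average.
final case class Remediation(identified: Instant, remediated: Instant) {
  def timeToRemediate: Duration = Duration.between(identified, remediated)
}

object RemediationMetrics {
  // Mean time to remediate across past remediations, if any exist.
  def meanTimeToRemediate(events: Seq[Remediation]): Option[Duration] =
    if (events.isEmpty) None
    else Some(Duration.ofSeconds(events.map(_.timeToRemediate.getSeconds).sum / events.size))
}
```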

I think this metric makes sense from a more technical perspective on the security of the system, whereas the incidents-and-losses view makes more sense from a business risk perspective. Having this additional perspective alongside the business one seems like a useful way to round out the picture.


OWASP Dependency Check

The Open Web Application Security Project (OWASP) provides a number of tools and resources to help create more secure web applications. Their dependency check tool is designed to integrate into the build step of your application and tell you if any of your downstream dependencies have known security issues. The idea is a simple but powerful one: when your application is built it pulls in a number of libraries, and those libraries pull in even more transitive dependencies. You need to look at all of them and cross-reference the National Vulnerability Database to see if any of them have known issues. While you may know to keep an eye out for announcements of security findings in libraries you use directly, you may not easily know which transitive dependencies your system pulls in. It’s like a more traditional vulnerability scanner, but rather than pointing it at existing servers to find those that are vulnerable, you put it into the build pipeline to find those you are about to make vulnerable.

The tool has integrations for a number of popular build systems for the JVM and .NET. It also has experimental support for Ruby, Python, C++, and Node. It predates the GitHub vulnerability scanner but works in a similar way. Since I’m working with Scala day to day, using SBT as my build system, GitHub’s scanner doesn’t cover my projects, so I’ve been using an SBT plugin to start integrating the check into the builds of various services and libraries. The report generated provides both the severity of the vulnerability and the confidence that the vulnerability was matched to the right binary. So far that confidence rating has always been correct about whether the binary matches the one indicated as vulnerable, but I’m not yet sure how much to trust it, since most of my repos use similar dependency sets and thus generate similar reports.
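For SBT specifically, wiring it in is only a few lines. This is a sketch from memory of the plugin I used; the exact coordinates, version, and setting names should be checked against that plugin’s README:

```scala
// project/plugins.sbt -- plugin coordinates and version are illustrative
addSbtPlugin("net.vonbuchholtz" % "sbt-dependency-check" % "3.1.1")

// build.sbt -- fail the build when a dependency matches a vulnerability at or
// above this CVSS score (setting name as I recall it from the plugin docs)
dependencyCheckFailBuildOnCVSS := 7.0
```

Running `sbt dependencyCheck` then produces the report described above.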

Taking an afternoon to set up the scanner, run it against a variety of repos you work with, and look through the findings seems highly worthwhile. If you’ve been keeping up with your library upgrades you likely won’t find anything earth shattering. But if you have a dependency that hasn’t been kept up to date, you might be in for an interesting surprise.

Book Chat: Securing DevOps

Securing DevOps is an introduction to modern software security practices. It both suffers and succeeds by being technology- and tool-agnostic. By not picking a particular technology stack it will remain relevant for a long time; however, it isn’t a complete solution for anyone, since it gives you classes of tools to look for rather than a complete package for software security. If you need to start a software security program from zero, it lays out a framework to get started with.

While I’ve only been doing software security full time for a few months now, I feel like identifying the practices to engage in isn’t the hard part; it’s the specifics of the implementation where I want additional guidance. I know I should be doing static analysis of the code as part of my CI pipeline, but I don’t know how to handle false positives in the pipeline or which findings are worth failing a build over. I don’t know what sort of custom rules I should be implementing in the scanner for my technology stack.

The book does go into more detail on the subject of setting up a logging pipeline. It describes how to set up rules to look for logins from abnormal geographic locations and how to look for abnormal query strings. The described logging platform is nothing unusual for a midsized web application; however, I don’t know whether a small organization could stand up this level of infrastructure. Hooking up the ELK stack, while open source, is not easy, and the Kibana portion requires a fair bit of customization and time to get everything together and working.
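As a toy example of the kind of rules the book describes (done here in plain Scala rather than the book’s ELK pipeline, and with made-up field names and patterns), the checks might look something like this:

```scala
// Toy sketch: flag logins from a country the user hasn't been seen in before,
// and flag query strings containing obviously suspicious fragments.
final case class LoginEvent(user: String, country: String, queryString: String)

object LoginRules {
  // Purely illustrative patterns; a real rule set would be far richer.
  private val suspiciousFragments = Seq("<script", "union select", "../")

  def abnormalGeo(event: LoginEvent, recentCountries: Map[String, Set[String]]): Boolean =
    recentCountries.get(event.user).exists(seen => !seen.contains(event.country))

  def abnormalQueryString(event: LoginEvent): Boolean = {
    val qs = event.queryString.toLowerCase
    suspiciousFragments.exists(qs.contains)
  }
}
```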

It feels as though we are missing a higher level of abstraction for dealing with these concerns. Perhaps the idea that most software applications should have to go through this level of effort to get a ‘standard’ security setup for a web application is reasonable. Even on the commercial tools side there seems to be a lack of complete solutions. Security information and event management (SIEM) tools try to provide this, but they each still require significant setup to get your logs in and teach the program how to interpret them. It feels like some of this could be accomplished by building more value into a web application firewall (WAF). WAFs were not fully endorsed by the book, due to the author having had a bad experience with a misconfiguration. Personally, I think a WAF seems necessary to protect against distributed denial of service style attacks.

Overall the book is an introductory to intermediate text, not the advanced treatment I was looking for. If you’re bootstrapping an application security program, this seems like a reasonable place to get started. If you’re trying to find new tactics for your established program, you’ll probably be disappointed.

BadSSL.com

I ran across badssl.com recently and needed to share. The basic idea of the site is that it hosts a number of subdomains with all sorts of variants of SSL certificates. The example certificates cover the whole range of things that can go wrong with a certificate, including expiration, self-signed certs, revoked certificates, and certificates for the wrong host. It also exercises the strength of the cryptography being used, serving certificates with a number of different kinds of encryption to test against. This is all so you can see whether your browser is securing you properly.

There is a more interesting use case, however. The associated GitHub repo has instructions for booting up the site locally inside a docker container, so you can run your automated test suite against it and exercise all sorts of networking code outside of a browser. The container hosting a separate copy of the site keeps your integration tests from reaching out to the public internet for resources. Having your integration tests depend on public resources on the internet isn’t a good practice for a number of reasons, such as the round-trip time, the dependency on someone else’s infrastructure for your processes, and simple consideration for someone else’s resources. But this container lets you avoid all of the work of defining what certificates are needed, generating the various certificates, and installing all of them.
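As a sketch of how the certificates get used, a test might simply assert that your client refuses to finish the TLS handshake against one of the bad hosts. The example below hits the public expired-certificate subdomain for brevity; with the docker container you would point it at your local hostname instead:

```scala
import java.net.URL
import javax.net.ssl.{HttpsURLConnection, SSLHandshakeException}
import scala.util.{Failure, Success, Try}

// Attempt an HTTPS request and report whether the certificate was rejected.
object BadCertCheck {
  def connect(url: String): Try[Int] = Try {
    val conn = new URL(url).openConnection().asInstanceOf[HttpsURLConnection]
    conn.setConnectTimeout(5000)
    try conn.getResponseCode finally conn.disconnect()
  }

  def main(args: Array[String]): Unit =
    connect("https://expired.badssl.com/") match {
      case Failure(_: SSLHandshakeException) => println("correctly rejected the expired certificate")
      case Failure(other)                    => println(s"failed for another reason: $other")
      case Success(code)                     => println(s"unexpectedly connected (HTTP $code)")
    }
}
```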

The test case we used the certificates for didn’t turn up any bugs, but it did make us confident in the implementation. This confidence helped us move along more quickly and be sure we were appropriately securing the connections.

Book Chat: Working Effectively With Unit Tests

Working Effectively With Unit Tests is a discussion not of when to unit test or how to unit test, but of how to know when you’ve done it well. It works backwards from the idea that tests should be Descriptive And Meaningful Phrases (DAMP), as opposed to the traditional software mnemonic Don’t Repeat Yourself (DRY). By allowing some duplication in tests and focusing on the clear intention of what is to be accomplished, you get tests that are easier to read and more focused on the object under test rather than its collaborators.
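As a hypothetical illustration (not an example from the book), a DAMP-style test repeats a little setup in each case so that the name and body read as one clear phrase about the object under test; this sketch assumes ScalaTest is available:

```scala
import org.scalatest.funsuite.AnyFunSuite

// A small object under test, invented for the illustration.
final case class Order(subtotal: BigDecimal) {
  def totalWithDiscount(rate: BigDecimal): BigDecimal = subtotal - (subtotal * rate)
}

// Each test restates its own inputs rather than sharing a fixture, trading a
// little duplication for readability.
class OrderDiscountTest extends AnyFunSuite {
  test("a ten percent discount reduces a 100 dollar order to 90") {
    assert(Order(BigDecimal(100)).totalWithDiscount(BigDecimal("0.10")) == BigDecimal(90))
  }

  test("a zero percent discount leaves the order total unchanged") {
    assert(Order(BigDecimal(100)).totalWithDiscount(BigDecimal(0)) == BigDecimal(100))
  }
}
```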

The style described forces out a lot of the elaborate mock setups common in most first attempts at unit testing. That is a good intention; however, like most resources, I feel it falls short of describing a means to actually get rid of these problems in real applications, as opposed to the toy applications in books and articles. The ideas it provides do work towards those ends admirably. To me, the ideas presented seem to drive towards a more functional style of programming: methods gained more arguments, which made them more flexible, and the objects they lived on were less prone to carrying around extraneous state. The book didn’t discuss this in functional programming terms, but sort of implied that goal around the edges.

Compared to some of the other books on unit testing I’ve read, this felt more concise, and it was definitely less focused on a specific testing framework. It feels written for someone who has been doing unit testing for a while and either isn’t getting value from the activity or is having maintainability problems with their tests. For those audiences it seems like a good perspective for working out of their problems. For people new to unit testing, it may be a little too broad in what you should do and not prescriptive enough.