Reusable Code: The Mason Jars of Security

Guest editorial by David Mortman

It’s early fall here in Ohio, which means it’s time for the second round of canning for the winter. So last weekend my kitchen was covered in bushels of apples, pounds of greens, and a whole lot of canning jars. As you know by now, I love to cook and I love a well-designed kitchen tool. Mason jars in particular make me extremely happy. They were invented in 1858 and fundamentally haven’t changed in the subsequent 150 years.

Not only do they just work, but even though there is a huge range of jar sizes, from 4 ounces to 1 gallon, there are only two sizes of tops. This makes them very easy to manage, not just during the canning process but for the rest of the year as well. The jars are also long-lasting and completely reusable, except for one small piece of the lid assembly, which, much like a good password, should only be used once.

More relevantly, going old school with mason jars reminded me of going old school with reusable code, and how valuable it can be in a security context. Writing code without vulnerabilities is hard, and fixing that code isn’t any easier. This is especially true when you have different code doing the same thing all over your application or, even worse, across multiple applications. Reusing code not only cuts grunt work for your coders (after all, who wants to write a new input validation engine every time they have a new form?) but also frees them to focus on other issues. And by reusing code, you stand a better chance of understanding it completely, which in turn improves your odds of identifying issues.

The real value of code reuse, though, comes when you take advantage of the ability to put that code into libraries or modules and then call it as necessary. After all, wouldn’t you vastly prefer to fix a bug in one place and have that fix apply everywhere, rather than searching through potentially millions of lines of code? So in the case of the input validation problem, the solution is to build a library once and then mandate that it be used everywhere. This approach has the added advantage of simplifying your overall code structure, and the less complex your code is, the more secure it is likely to be.
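To make the idea concrete, here is a minimal sketch (all names hypothetical, not from any particular product) of what a shared input-validation module might look like: every form handler calls the same function, so fixing or tightening a rule happens in exactly one place.

```python
# A minimal, illustrative sketch of a centralized input-validation
# module. Every form handler calls is_valid(), so a rule fix lands
# in one place and applies application-wide.
import re

# Central allow-list of validation rules. Add or fix a rule here and
# every caller in the application picks up the change.
_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
}

def is_valid(field, value):
    """Return True if value matches the allow-list rule for field."""
    rule = _RULES.get(field)
    if rule is None:
        # Unknown fields fail closed rather than slipping through.
        return False
    return bool(rule.fullmatch(value))
```

The point isn’t the specific regexes, it’s the single choke point: a handler that validates its own input with an ad hoc check is exactly the one-off that ends up vulnerable.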

This modularization is not limited to input validation. It can be used anywhere there is a repeatable task. Case in point: the better-designed encryption products keep all of the cryptographic functions in separate modules, so you can swap out or update algorithms and hashes as necessary without making major changes to the product as a whole.
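As a sketch of that design (illustrative only, not any real product’s API), the application can call a single hashing function while the algorithm choice lives behind the module boundary, so retiring a weak algorithm is a one-line change instead of an application-wide hunt:

```python
# An illustrative sketch of isolating cryptographic choices behind
# one module. Callers use digest(); only the constant below changes
# when an algorithm needs to be retired.
import hashlib

# Hypothetical: the single point of algorithm selection.
_HASH_ALGORITHM = "sha256"

def digest(data: bytes) -> str:
    """Hash data with the currently approved algorithm."""
    h = hashlib.new(_HASH_ALGORITHM)
    h.update(data)
    return h.hexdigest()
```

Swapping in a new hash now means changing `_HASH_ALGORITHM`, rather than finding every place a digest was computed inline.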

Code reuse in general, and modularization in particular, makes addressing any future issues much easier. Even if that means scrapping a library or module completely and inserting a new one, you are still only doing it in one place. It’s much cleaner, and it makes testing simpler, faster, and cheaper. All in all, a huge cost savings across the board.

David Mortman is a regular contributor to Threatpost and is a contributing analyst at Securosis.

*Mason jars image via Flickr user Average Jane’s photostream, Creative Commons

Discussion

  • Lori MacVittie on

    What a great analogy and post. Reusable code, whether via services or libraries or modules, is an efficient way to promote better security and, what's better, it can reduce the time required to address new vulnerabilities. It's just a great way to deploy shared application functions related to security (or any highly reused application functions, for that matter; SOA and other service-related architectures were just a bit too complex to really take off, that's all).

    The only caution is that sharing services, like sharing compute resources, shares risk. So if that shared code ends up vulnerable for some reason, the flaw propagates across all dependent applications.

    Regards,

    Lori

  • jcran on

    I'm all for reusability & consolidation of functionality (and drinking out of mason jars :), but some food for thought:

    Simple code consolidation / reuse isn't sufficient to create data security. Putting buggy code in a library & referencing that library can /create/ security problems. MS09-035 / The ATL Templates issue comes to mind - http://msdn.microsoft.com/en-us/visualc/ee309358.aspx

    Doesn't better security come from understanding the data flow /through/ the systems, as opposed to hiding functionality?

    Clearly labeled (transparent) mason jars are necessary. The OWASP ESAPI is a good example of this. Consolidation of security-related functionality such as authentication & authorization to a single point in the application should be a goal of code reuse.

    jcran

  • Michael on

    I think that reusing code is a must when it comes to writing secure code. It's imperative that when examining a project from a security standpoint you know precisely where all the db, input, rendering, etc... touchpoints are and don't have to be concerned about any one-offs floating around out there.

  • Alex Carmel-Veilleux on

    I think pretty strongly that reusable code is an order of magnitude harder to write. And an order of magnitude again harder to do well. If the library is too opaque and complicated, it becomes a hiding place for bugs.
