BlackBerry Blog

Secure Engineering: Work With Your Customers

FEATURE / 08.03.17 / T.Rob Wyatt

Ask a dozen CISOs for a definition of secure engineering and chances are you will get back a dozen variations on providing assurance and integrity of the software delivery supply chain. While that's true as far as it goes, my problem with it is that it doesn't go far enough.

I propose that the secure engineering supply chain extends to the Production servers of whoever uses the software to run their business. From this perspective, Independent Software Vendors (ISVs) may need to rethink the division of labor between themselves and their customers when it comes to implementing security, and take on a larger portion of that effort.

You Aren’t in Just One Business

My argument is that an ISV is never just in the software business. Consider the Hannaford supermarkets breach of 2008, which was notable both for the size of the breach and as the first publicly reported breach of data in transit. Although IBM's WebSphere MQ was never implicated in the breach by any of the investigating security firms, a reporter connected the dots of what was known - Hannaford had misconfiguration issues and Hannaford used IBM MQ to move data. Suddenly IBM's unsexy bulk data transport that had been virtually unknown outside specialist circles was in the spotlight, and IBM's software division found itself in the grocery business as far as public perception was concerned.

As an ISV, what businesses might your company suddenly find itself in after a high-profile breach? If the company's reputation rests in part on how successfully customers secure the software they buy from you, then it's in your interest to think of your software delivery supply chain as extending past the sale and all the way to your customers' Production servers.

If this view of the supply chain is new, then how much of the pipeline, from handoff of the software to its Production implementation, is likely to be poorly secured? The more of that segment that can be addressed before the software is delivered to the customer, the less likely the ISV is to suddenly realize after a breach that, at least in the public eye, it is now in the grocery business.

Secure-by-Default

One approach is for the software to adopt a secure-by-default posture. Many software products have been developed and delivered with ‘security optional.’ In other words, it's there if you want to use it, but by default, the product's first goal is to be easy to use.

When customers turn security on and must configure a myriad of controls, the laws of probability ensure that at least some of them will get it horribly wrong and leave their systems wide open to a breach, even with the best of intentions. The greater the proportion of security engineering that is left to the customer, the greater the proportion of customers who will get it wrong, and the more likely a breach becomes.

Adopting a secure-by-default posture will impose some training requirements on existing customers, but this can be addressed by giving them the means to revert the software to its old behavior. Customers who take the defaults will be much better off, and customers who revert to the old behavior will at least be no worse off. That alone is a significant win in terms of minimizing the chances of a breach and reducing the ISV's reputational risk.

The ultimate goal, of course, is that customers won't disable security, and the best way to achieve that is by reducing the skill barrier and the administrative overhead. The closer the incremental cost of securing a node or a component of the software comes to zero, the more likely the customer is to run secure in Production.
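To make the pattern concrete, here is a minimal sketch of what a secure-by-default configuration loader might look like. The option names, port number, and warning behavior are hypothetical illustrations, not taken from any particular product; the point is simply that the customer gets security without asking for it and must make a deliberate, visible choice to opt out.

```python
from dataclasses import dataclass

@dataclass
class TransportConfig:
    """Connection settings with a secure-by-default posture (illustrative only)."""
    host: str
    port: int = 8443          # hypothetical TLS listener port
    use_tls: bool = True      # security is the default, not an option to enable
    verify_peer: bool = True  # certificate validation stays on unless explicitly relaxed

def load_transport_config(raw: dict) -> TransportConfig:
    """Build a config from customer-supplied settings.

    Anything the customer does not specify falls back to the secure default;
    reverting to the old plaintext behavior requires a deliberate opt-out.
    """
    cfg = TransportConfig(
        host=raw["host"],
        port=raw.get("port", 8443),
        use_tls=raw.get("use_tls", True),
        verify_peer=raw.get("verify_peer", True),
    )
    if not cfg.use_tls:
        # Make the downgrade visible so it is a conscious, auditable choice.
        print(f"WARNING: TLS disabled for {cfg.host}:{cfg.port} by explicit override")
    return cfg

# A customer who takes the defaults gets TLS; a customer who needs the old
# behavior can still get it, but only by saying so explicitly.
modern = load_transport_config({"host": "app01.example.com"})
legacy = load_transport_config({"host": "app02.example.com", "use_tls": False})
```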

A Real-World Example

For example, on a recent engagement I was called into a meeting with the Project Manager and the Operations Manager after they noticed my documentation specified the use of Transport Layer Security (TLS) for all network connections. The original project design did not mention transport security at all. The client assumed plaintext communications by default, so to them the lack of a specification meant no TLS.

As a security guy, I assume TLS by default unless I'm told otherwise. At the meeting, my clients explained that the company's policy was that TLS was an unjustified cost for internal communications, and that the TLS specification in my documents needed to be removed before people started acting on it.

A bit awkwardly, I explained that I'd automated all of the certificate handling and that they'd been running with TLS for six months. It was everywhere, and most of the administrators were unaware of it because the tools I provided took care of setting it up without any human intervention.
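I can't reproduce that client's tooling here, but the general shape of this kind of automation is simple enough to sketch. The Python fragment below is illustrative only: the file locations, the certificate subject, and the use of a self-signed certificate are assumptions for the sake of the example (a real deployment would more likely enroll each node with an internal CA). It provisions a key and certificate for a node if none exists and builds a TLS context from them, so an administrator never has to touch the certificate plumbing.

```python
import os
import socket
import ssl
import subprocess

CERT_DIR = "/var/lib/myapp/tls"   # hypothetical location for node credentials
KEY_FILE = os.path.join(CERT_DIR, "node.key")
CERT_FILE = os.path.join(CERT_DIR, "node.crt")

def ensure_node_credentials() -> None:
    """Create a private key and self-signed certificate if the node has none."""
    if os.path.exists(KEY_FILE) and os.path.exists(CERT_FILE):
        return
    os.makedirs(CERT_DIR, exist_ok=True)
    subprocess.run(
        [
            "openssl", "req", "-x509", "-newkey", "rsa:2048",
            "-keyout", KEY_FILE, "-out", CERT_FILE,
            "-days", "365", "-nodes",
            "-subj", f"/CN={socket.getfqdn()}",
        ],
        check=True,
    )

def server_tls_context() -> ssl.SSLContext:
    """Build a server-side TLS context from the automatically provisioned files."""
    ensure_node_credentials()
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    return ctx
```

Run from an installer or a routine maintenance script, a step like this makes encryption part of normal operations rather than a separate project handed to the customer.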

As an independent consultant, I can only do this for one customer at a time.  Assuming enough customers are willing to hire me, I might get to 1% of the installed base before I retire. What if the software vendor applied the same techniques I used?

Normal maintenance and upgrade activity would silently deploy TLS across their installed base with almost no impact to their customers and without the need for anyone to disable security. That's a much better outcome and this strategy applies to any commercial software.

Ensuring That Secure Engineering Becomes a Known, Expected Cost in Production

The main hurdle to practicing secure engineering through to Production is not the engineering itself but the funding. When we assume that the software delivery supply chain ends with the software sale, the incentive is to minimize the engineering cost by exposing controls for the customer to use, and hoping for the best. Anything beyond that is often seen as unrecoverable engineering cost and certainly not eligible for funding.

But the moment we recognize the ISV's reputational risk after the sale, a business case emerges to fund engineering that mitigates that risk. In that light, it makes more sense to manage some of the reputational risk of deployed installations in-house than to entrust all of it to thousands of customers with widely varying degrees of skill.

The only question for the ISV is whether the software delivery pipeline stops at the customer's door or extends all the way to Production. That's a question worth asking every time we read about a new breach.


About T.Rob Wyatt

Guest Author

T.Rob is an independent security consultant, conference presenter, and author specializing in security of enterprise middleware. He vaguely remembers being able to sleep at night before getting into the business and finding out how the security sausage is made. He believes that he might someday get a good night's sleep again if he can just plug enough security holes. Please do not disabuse him of this notion as he seems to be making some progress and we want him on task. You can find him on Twitter at @DeepQueue and blogging regularly here at Cylance.